Campaign

A Campaign is one outbound initiative against a list of accounts. Every campaign points to exactly one Product, which means it inherits that product’s ICP, default language, and scoring calibration. The campaign is the unit you operate against — bulk-add accounts to it, score them, review the scores, log conversations against them, and measure spend against the outcomes it produced.

Every campaign also carries a goal: why you're reaching out. The default is sales prospecting; ten others cover partnership recruitment, press / visibility, investor outreach, recruiting, customer expansion, win-back, and more. The goal reshapes the ICP, the scoring rubric, the message intent, and the guardrails — same product, different intent, different outcome. See Outreach reasons and targets.

Switching a campaign’s goal mid-flight is allowed but the existing scores stay as they were (they were rated against the previous goal’s rubric). When that happens, the campaign detail page shows an orange “rescore to refresh” banner that tells you how many accounts have scores under a different goal and links straight to the scoring page — one click and you’re back in sync.

| Bucket | Fields |
| --- | --- |
| Identity | name, slug (auto-generated from name, collisions suffixed within the project), project, product |
| Intent | goal (sales / partnership / press / investor / recruiting / customer expansion / win-back / event / research / influencer / renewal — see Outreach reasons and targets) |
| Scope | audience_config (saved-filter / manual selection) |
| Language | communication_language (inherits from Product when blank) |
| Workflow | status (operator-facing stage), scoring_status (background-job state), archived_at |
| Cached statistics | total_accounts, scored_accounts, approved_accounts, outreached_accounts, responded_accounts, response_rate |
| AI surface | icp (inherited from product, read-only on the campaign), score_distribution (histogram of fit scores, see below) |
| Provenance | created_by, created_at, updated_at |

The cached statistics are maintained as you operate — outbound logs bump outreached_accounts, inbound pastes bump responded_accounts, reviews bump approved_accounts. The numbers you see on the campaigns list and on the campaign detail header come from these columns, so they’re fast even on large projects.
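
A minimal sketch of that counter-maintenance idea, assuming simple event handlers (the handler names and object layout are illustrative, not LeadHunter's internals):

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    # Hypothetical mirror of the cached columns listed in the table above.
    total_accounts: int = 0
    scored_accounts: int = 0
    approved_accounts: int = 0
    outreached_accounts: int = 0
    responded_accounts: int = 0

    @property
    def response_rate(self) -> float:
        # Replies over accounts reached, guarding the divide-by-zero case.
        if self.outreached_accounts == 0:
            return 0.0
        return self.responded_accounts / self.outreached_accounts

def on_outbound_logged(stats: CampaignStats) -> None:
    stats.outreached_accounts += 1   # logging outbound bumps the counter

def on_inbound_pasted(stats: CampaignStats) -> None:
    stats.responded_accounts += 1    # pasting a reply bumps the counter

def on_review_approved(stats: CampaignStats) -> None:
    stats.approved_accounts += 1     # an approval bumps the counter
```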

Each campaign carries its own communication language. It drives the language of every piece of AI-generated content the campaign produces — the ICP summary, the reasons attached to each fit score, and the outbound message drafts. Leave it blank to inherit the language set on the Product; set it explicitly to override the product default for this campaign only. This lets you run two campaigns of the same product, one in Spanish for the Spain market and one in English for the UK, without duplicating the product.

When sending a message, language resolution checks the account first, then walks the chain Account → Campaign → Product → User → English. So an account whose own language is set to Catalan always reads Catalan, regardless of which campaign reaches them.
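
A sketch of that resolution rule (the uniform `language` attribute is an assumption for brevity; on the campaign the field is communication_language):

```python
def resolve_language(account, campaign, product, user) -> str:
    """Walk the documented chain: Account -> Campaign -> Product -> User -> English.

    Each argument is assumed to expose a `language` attribute that is falsy
    when unset; the attribute name is illustrative.
    """
    for entity in (account, campaign, product, user):
        if entity is not None and getattr(entity, "language", None):
            return entity.language
    return "en"  # final fallback: English
```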

A campaign moves through a workflow status:

  1. draft — Created, no accounts yet.
  2. icp_definition — Verify the inherited ICP, adjust as needed, approve.
  3. database_config — Decide where accounts come from (saved filter, manual list, import).
  4. scoring — Each account in scope gets a per-product fit score.
  5. outreach — You’re actively reaching out.
  6. active — Ongoing campaign with mixed reach-out + reply work.
  7. paused — Temporarily on hold; everything is preserved, you’ve just stepped away.
  8. completed — Campaign is wrapped up.

The status is an operator hint, not a hard state machine — LeadHunter doesn’t refuse to score a campaign because its status says “outreach”. You can move freely between stages, and the name is shown to your team for context, not enforced.

Archive is orthogonal: any campaign can be soft-hidden via the archive action (sets archived_at). Archived campaigns disappear from the default list but stay in the database with every conversation, score, expense, and audit-trail entry intact. Use archive for “campaign is done but I want the history”; use delete (name-confirmed) only when you’re sure.

Renaming a campaign is inline on the detail page — click the title, type, save. No extra modal.

Separate from the workflow status above, every campaign carries a scoring_status that reflects the background scoring job:

  • idle — nothing in progress.
  • running — the score-leads task is processing this campaign’s accounts. The campaign page polls and updates as rows come back.
  • completed — the most recent scoring batch finished successfully.
  • failed — the most recent batch hit an error. The error is recorded and the operator can retry.
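
If you're watching this from a script rather than the campaign page, the polling loop looks roughly like this (fetch_status, the cadence, and the timeout are all illustrative):

```python
import time

def wait_for_scoring(fetch_status, poll_seconds: float = 2.0, timeout: float = 600.0) -> str:
    """Poll scoring_status until the background job settles.

    fetch_status is a hypothetical callable returning one of the four
    documented states; cadence and timeout are arbitrary choices here.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status           # terminal: surface to the operator (retry on failed)
        time.sleep(poll_seconds)    # idle or running: keep waiting
    raise TimeoutError("scoring did not settle within the timeout")
```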

Scores are reused across campaigns of the same product — see ICP and scoring → Score caching.

Once scoring has run, the campaign page shows a 10-bucket histogram of fit scores (0–1, 1–2, …, 9–10). It’s the fastest way to read whether the ICP is well-calibrated — a healthy distribution skews bimodal (excellent + mismatch piles, modest moderate middle); a flat or all-moderate distribution usually means the ICP isn’t sharp enough yet. See ICP and scoring → Why are my first batch of scores reading vague.
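
The binning itself is simple; a sketch, assuming scores arrive as floats on a 0–10 scale:

```python
def score_histogram(scores: list[float]) -> list[int]:
    """Bucket 0-10 fit scores into ten bins (0-1, 1-2, ..., 9-10).

    A score of exactly 10 lands in the top bucket; server-side binning may
    differ, so this is a sketch of the shape, not the production query.
    """
    buckets = [0] * 10
    for score in scores:
        buckets[min(int(score), 9)] += 1
    return buckets

# A healthy, bimodal shape: piles at both ends, a modest moderate middle.
print(score_histogram([9.4, 8.7, 9.1, 1.2, 0.8, 2.3, 5.5]))
# -> [1, 1, 1, 0, 0, 1, 0, 0, 1, 2]
```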

The join row between a campaign and an account is a CampaignAccount. It carries per-campaign workflow data — the fields you’ll touch most as an operator:

| Field | What it holds | Where it moves |
| --- | --- | --- |
| review_status | Have you, the operator, vetted this AI-scored account yet? | pending → approved / rejected on the campaign's review queue. |
| outreach_status | What's happening with outbound? | not_started → draft → scheduled → sent → bounced / responded. Forward-only. |
| override_score | Optional manual score that supersedes the AI score for this campaign only. The Product-level AI score doesn't move. | Set inline from the row. |
| contact_name / contact_title / contact_email / contact_linkedin | Which specific person the campaign's outreach is aimed at, when the underlying account has multiple contacts. | Picked from the account's Contact list when you start a thread. |
| tags | Free-form per-campaign labels — "Q2 priority", "warm intro", etc. | Edited on the row. |
| notes | Per-campaign notes that don't belong on the account itself. | Edited on the row. |
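
One way to read the forward-only rule for outreach_status is as a rank comparison; treating bounced and responded as parallel terminal branches is an assumption here:

```python
# Rank encoding of the forward-only rule from the table above. Treating
# bounced and responded as parallel terminal branches after sent is an
# assumption; the exact terminal semantics aren't spelled out here.
RANK = {"not_started": 0, "draft": 1, "scheduled": 2,
        "sent": 3, "bounced": 4, "responded": 4}

def can_advance(current: str, new: str) -> bool:
    # Forward-only: a row never moves back to an earlier stage.
    return RANK[new] > RANK[current]
```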

The underlying AI fit score lives on the Product, not on the CampaignAccount, so adding an account to a second campaign of the same product doesn’t trigger a re-score.
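
A sketch of that caching behavior, with an in-memory dict standing in for wherever the Product-level score actually lives:

```python
# Sketch of the reuse rule: fit scores key on (product, account), never on
# the campaign, so a second campaign of the same product hits the cache.
_score_cache: dict[tuple[int, int], float] = {}

def fit_score(product_id: int, account_id: int, score_fn) -> float:
    key = (product_id, account_id)
    if key not in _score_cache:
        _score_cache[key] = score_fn(product_id, account_id)  # the expensive AI call
    return _score_cache[key]
```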

The day-to-day loop runs through these CampaignAccount fields:

  1. Scoring fills ai_score for every row when the campaign first scores.
  2. You review — sort by score, read the AI’s reasons, mark each as approved or rejected. Or set override_score if the AI was clearly off.
  3. For approved rows you kick off outreach — open the Conversation tab, pick a contact, draft and send. Each Log sent flips outreach_status to sent and counts the account as initiated in the dashboard funnel.
  4. When the lead replies, paste the inbound; outreach_status flips to responded and the dashboard's response cohort gets credit.

The campaign detail page surfaces all four stages — review queue, accounts panel, conversation log, costs panel — so the loop stays inside one screen.

Approvals and rejections feed back into scoring


Every approve or reject you mark is a calibration signal. Once a product crosses ~10 reviewed accounts across all its campaigns, the most recent approvals + rejections start flowing into the scoring prompt as concrete examples. See ICP and scoring → The feedback loop.
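
Roughly what that folding step could look like (the review shape, threshold, and prompt wording are hypothetical, not LeadHunter's actual prompt):

```python
def calibration_block(reviews: list[tuple[str, str]], limit: int = 10) -> str:
    """Fold recent operator verdicts into the scoring prompt as examples.

    reviews is a hypothetical newest-first list of (account_name, verdict)
    pairs pooled across the product's campaigns; the threshold and wording
    are illustrative only.
    """
    if len(reviews) < 10:   # below ~10 reviews the signal is too thin to use
        return ""
    lines = [f"- {name}: operator marked {verdict}" for name, verdict in reviews[:limit]]
    return "Recent operator calibration examples:\n" + "\n".join(lines)
```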

This makes the first campaign on a new product the one where thoroughness in the review step pays off most: every decision shapes the next several hundred scores.

Once scoring finishes, the bottom of the list is usually noise — accounts the model flagged as mismatch (score < 5.0) that aren’t worth your outreach time. The campaign-accounts panel has a Remove low-fit button next to the In this campaign count that clears them in one pass.

The flow is two-step: clicking the button runs a dry-run preview that shows the count it would remove, then a confirm dialog appears. Nothing is deleted until you confirm.

What it targets: only the mismatch band (the lowest tier). Moderate and excellent accounts are never touched. Unscored accounts are also left alone — they haven’t been judged yet, so the model has no opinion to act on.

What it keeps by default, even when scored as mismatch:

  • Accounts you’ve manually approved (review_status='approved'). Your judgment beats the model’s — if you’ve already vetted the account, the AI’s “mismatch” call is the wrong one to honour. The preview dialog tells you how many were kept on this axis.
  • Accounts where outreach has already started — any outreach_status other than not_started (draft, scheduled, sent, bounced, responded). Removing them would orphan the conversation thread. The preview surfaces this count separately so you know exactly what’s being skipped and why.

Both defaults can be turned off if you really want a hard prune, but the safe defaults are almost always the right call.
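
Putting the whole selection rule together, a sketch of the dry-run filter (field names from the CampaignAccount table above; the row shape is hypothetical):

```python
def removable(rows, keep_approved: bool = True, keep_started: bool = True) -> list:
    """Dry-run selection for Remove low-fit: which rows the pass would drop.

    Rows are assumed to expose ai_score (None when unscored), review_status
    and outreach_status; the object shape is illustrative.
    """
    selected = []
    for row in rows:
        if row.ai_score is None or row.ai_score >= 5.0:
            continue   # unscored and moderate/excellent rows are never touched
        if keep_approved and row.review_status == "approved":
            continue   # operator judgment beats the model's mismatch call
        if keep_started and row.outreach_status != "not_started":
            continue   # removing would orphan the conversation thread
        selected.append(row)
    return selected
```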

The removal is scoped to this campaign only. The same account in a second campaign of the same product is untouched — and the underlying Product-level score isn’t deleted either, so if you later add the account to a new campaign with a tightened ICP and rescore, the row gets a fresh chance.

This is a destructive operation (no soft-delete, no archive — the CampaignAccount rows are gone). The account itself, its contacts, and any conversation messages on other campaigns are unaffected. If you removed a row by accident, re-add it from the Available to add column.

Bulk-adding accounts to a campaign honours every protection LeadHunter knows about, in one pass. The full matrix depends on the campaign’s goal — a Press campaign doesn’t block press accounts, a Recruiting campaign doesn’t block candidate accounts, a Customer expansion campaign treats customers as the target audience. See Outreach reasons and targets for the per-goal table.

Two invariants survive every goal:

  • do_not_contact accounts are always blocked. GDPR / opt-out is sticky regardless of campaign intent. To re-engage someone, change their status manually first.
  • competitor and personal_network accounts are always hard-blocked. Never overridable, in any goal.

Everything else flexes per goal: customer status is admissible in expansion / renewal / event / research / partnership goals and blocked elsewhere (overridable with include_customers); the supplier / investor / press / analyst / candidate soft-blocks each turn off for the goal that targets that type.
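
A sketch of that per-goal check; the mapping of supplier and analyst accounts to their targeting goals is a guess here, and the authoritative matrix lives in Outreach reasons and targets:

```python
HARD_BLOCKED = {"do_not_contact", "competitor", "personal_network"}  # never overridable

# Which goal treats which account type as its target audience. The press /
# investor / candidate rows follow the text above; mapping supplier and
# analyst to a goal is an assumption.
TARGETED_BY = {"press": "press", "investor": "investor", "candidate": "recruiting",
               "supplier": "partnership", "analyst": "press"}
CUSTOMER_GOALS = {"customer expansion", "renewal", "event", "research", "partnership"}

def check_bulk_add(account_status: str, goal: str, overrides: set[str]) -> str:
    if account_status in HARD_BLOCKED:
        return "blocked"                       # sticky, in every goal
    if account_status == "customer":
        if goal in CUSTOMER_GOALS or "include_customers" in overrides:
            return "allowed"
        return "soft-blocked"
    if account_status in TARGETED_BY:
        if goal == TARGETED_BY[account_status] or f"include_{account_status}" in overrides:
            return "allowed"                   # the goal that targets this type unblocks it
        return "soft-blocked"
    return "allowed"
```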

When the UI shows you blocked accounts after a bulk-add, it groups them by reason and tells you exactly which flag flips them in — phrased in terms of this campaign’s goal. A bulk-add into a Sales campaign that hit press accounts will tell you “this Sales campaign soft-blocks press accounts; pass include_press=true to override, or switch the goal so press is the target audience”. A Press campaign would not produce that message because press accounts aren’t blocked there.

Guardrails apply at the moment of bulk-add. Once an account is in a campaign, logging individual outbound messages to it doesn’t re-run these checks (DNC is the exception — DNC is enforced at message-send time too).

Every campaign carries a ledger of expenses — Adwords spend, agency fees, internal labor hours, software, content, events. Add entries from the campaign detail page or via the quick-add button on the campaigns list. The campaign then computes cost-of-acquisition:

  • Cost per outreached account — total spend ÷ accounts you’ve reached out to.
  • Cost per responded account — total spend ÷ accounts that have replied.
  • Cost per closed-won customer — shown on the Stats by product and campaign page; cohort-attributed using the last-30-day window.

Mixed currencies show per-currency totals without auto-conversion — LeadHunter doesn’t carry an FX layer. CAC is only computed when expenses share one currency; otherwise the UI surfaces a per-currency breakdown and explains the missing ratio.
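
A sketch of that rule, computing per-currency totals and withholding the CAC ratios unless exactly one currency is present:

```python
from collections import defaultdict

def cac_report(expenses, outreached: int, responded: int) -> dict:
    """Per-currency totals, with CAC ratios only when one currency is present.

    expenses is a hypothetical list of (amount, currency) pairs mirroring
    the ledger entries; no FX conversion is attempted, matching the rule above.
    """
    totals: dict[str, float] = defaultdict(float)
    for amount, currency in expenses:
        totals[currency] += amount
    report: dict = {"totals": dict(totals)}
    if len(totals) == 1 and outreached:
        (currency, total), = totals.items()
        report["currency"] = currency
        report["cost_per_outreached"] = total / outreached
        report["cost_per_responded"] = total / responded if responded else None
    return report

# e.g. mixed currencies: totals only, no CAC
print(cac_report([(1200, "EUR"), (300, "USD")], outreached=40, responded=6))
```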

Auto-tracked API costs (LLM tokens, Google Maps calls, Tavily searches) sit in their own card on the same panel: USD total + a per-model breakdown of which model spent what. It’s kept out of the CAC math because it’s in USD and your user-entered expenses may not be, but the two cards together answer the “what did this campaign actually burn?” question. Per-account enrichment cost is exposed on the account detail.

See Track campaign costs and CAC.