Run your first campaign

A complete walkthrough from an empty campaign to your first reviewed scores and reaching out. Plan for about 30 minutes of hands-on work spread across a couple of sessions while scoring runs in the background.

You’ll need three things in place. If any are missing, finish the Quick start first.

  • A Company (the project you’re working in).
  • A Product with a description, website, and at least an example or two of good-fit accounts. The product is what the campaign promotes.
  • A handful of accounts in your database — at minimum a few dozen, ideally a few hundred. Import them (Import accounts), build them from Google Maps lookups, or let inbound channels accumulate.

The product needs an approved ICP before scoring can run. If you haven’t generated one, do that first from the product’s detail page — LeadHunter drafts it from your website + example URLs, you review and approve. See ICP and scoring for the field-by-field breakdown.

Campaigns → New campaign:

  • Campaign goal — the first thing you pick, because everything else adapts to it. Defaults to Sales prospecting; pick a different goal (partnership, press, recruiting, customer expansion, win-back, event, research, influencer, investor, renewal) when the intent isn’t a sales pitch. The page hero, the ICP framing, scoring rubric, message drafts, and guardrails all flex per goal. See Outreach reasons and targets.
  • Product — pick the one this campaign promotes. A campaign points to exactly one product, and switching is allowed within the same company.
  • Name — name the audience, not the product. “Berlin bike shops Q2” tells you something; “Spring campaign” doesn’t.
  • Communication language — leave blank to inherit the product default. Set it explicitly when you’re running parallel campaigns of the same product in different markets (one in Spanish for Spain, one in English for the UK).

Save. The campaign opens in draft status with an empty accounts panel.

Three ways to fill the campaign — combine them freely:

  • From a saved filter — best for repeatable plays. Open the campaign’s accounts panel, click Bulk-add from filter, pick a saved filter. The reach estimator tells you how many accounts will land before you commit.
  • By hand — open the panel’s accounts side, search for an account by name or website, click to add. Use the + Add new account shortcut to slot in one specific company you don’t have yet.
  • From an import — accounts created during an import wizard can be routed straight into a campaign.

Bulk-add honours every guardrail in one pass. Two rules are universal across every campaign goal:

  • do_not_contact accounts are hard-blocked (never overridable) — GDPR and opt-out flags are sticky.
  • Competitor and personal-network accounts are hard-blocked (never overridable).

Everything else depends on the campaign goal:

  • Customers are blocked in sales / press / recruiting / investor / influencer / win-back campaigns unless you flip include_customers. They’re admissible by default in customer expansion / renewal / event / research / partnership campaigns — those are the goals where reaching customers is the point.
  • Suppliers, investors, press, analysts, candidates are soft-blocked in goals that don’t target them, with per-type override flags. A press campaign won’t block press accounts; a recruiting campaign won’t block candidate accounts; etc.

The UI tells you exactly which accounts were blocked, why, and which override (if any) unlocks them. See Outreach reasons and targets for the full per-goal matrix.
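The admission rules above can be sketched in a few lines. This is a minimal illustration, not LeadHunter's actual schema — the tag names, goal names, and `include_customers` flag are assumptions modelled on the docs:

```python
# Hard blocks apply to every campaign goal and cannot be overridden.
UNIVERSAL_BLOCKS = {"do_not_contact", "competitor", "personal_network"}

# Goals where reaching existing customers is the point.
CUSTOMER_OK_GOALS = {"customer_expansion", "renewal", "event", "research", "partnership"}

def is_blocked(tags, goal, include_customers=False):
    """Return True if an account with these tags is blocked for this goal."""
    if tags & UNIVERSAL_BLOCKS:           # hard block: no override exists
        return True
    if "customer" in tags and goal not in CUSTOMER_OK_GOALS:
        return not include_customers      # soft block with an override flag
    return False
```

Soft-blocked types (suppliers, press, candidates, …) would follow the same pattern as the customer branch, each with its own per-type override flag.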

For attribution on inbound campaigns, also set the relevant acquisition channel on accounts before adding them — see Track inbound leads. It doesn’t change the campaign mechanics, but it lets you slice “Adwords accounts that responded” later.

Scoring runs in the background. The campaign’s scoring_status advances idle → running → completed, and you can keep working — the page polls the status and updates the score column as rows come back.
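The page's polling loop amounts to the sketch below. `fetch_status` and `fetch_scores` are hypothetical callables standing in for however you read campaign state (the UI does this for you; you'd only write something like this against an API):

```python
import time

def wait_for_scores(fetch_status, fetch_scores, interval=5.0):
    """Poll scoring_status until it reads 'completed', then return scores."""
    while fetch_status() != "completed":  # advances idle -> running -> completed
        time.sleep(interval)
    return fetch_scores()
```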

What to expect:

  • Per account: a few seconds for the LLM call, plus website discovery + scraping for accounts that arrived without a URL. Most batches of a few hundred finish in 5–15 minutes.
  • Cost: a small fraction of a cent per account on the default Gemini model. Auto-tracked under API costs — you don’t enter it.
  • Reuse: if the account already has a score against this product (from a previous campaign of the same product), the score is reused — no re-scoring cost.
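The reuse rule behaves like memoisation keyed on (account, product) — a second campaign for the same product finds the score already cached. A minimal sketch, with `score_fn` standing in for the actual LLM call:

```python
def make_scorer(score_fn):
    """Wrap an expensive scoring call with a per-(account, product) cache."""
    cache = {}
    def get_score(account_id, product_id):
        key = (account_id, product_id)
        if key not in cache:              # only pay the LLM cost on a miss
            cache[key] = score_fn(account_id, product_id)
        return cache[key]
    return get_score
```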

Each account ends with three things attached:

  • ai_score — a 0–10 number.
  • score_label — excellent (≥8), moderate (5–7), mismatch (<5).
  • score_reasons — 2–4 short, typed justifications (positive / negative / neutral). Read them — they tell you why the AI picked that score, and reading a handful is the fastest way to spot a mis-calibrated ICP.
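The label thresholds above reduce to a small mapping function — shown here only to make the band edges explicit (a score of exactly 8 is excellent, exactly 5 is moderate):

```python
def score_label(ai_score):
    """Map a 0-10 ai_score to its label using the documented thresholds."""
    if ai_score >= 8:
        return "excellent"
    if ai_score >= 5:
        return "moderate"
    return "mismatch"
```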

If the reasons feel off across many accounts, edit the ICP on the product (sector, firmographics, anti-patterns) and rescore. The ICP is what the model anchors on.

If the campaign is large, the bottom of the list is usually noise — accounts scored as mismatch that aren’t worth your review time. Click Remove low-fit on the campaign-accounts panel to clear them in one pass; approved and already-contacted rows are preserved by default. Details: Campaign → Pruning the low-fit tail after scoring.

Then sort by score descending. Skim the top, and decide row-by-row:

  • Approve the account if the AI got it right and you want to reach out.
  • Reject if the AI got it wrong — it’s a mismatch or the timing’s bad.
  • Override the AI score if you want to nudge a specific account up or down without rejecting it.

Approvals and rejections aren’t just for you. Once a product has roughly ten reviewed accounts across all its campaigns, LeadHunter starts feeding your most recent approvals and rejections to the scoring model as concrete examples of “what good looks like here” — and the next batch of scores gets sharper. Below that threshold the example URLs on the product remain the only anchor. So treat the review pass as calibration, not just triage.
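The threshold behaviour can be sketched as follows. The field names and the choice of "most recent k" are assumptions for illustration; the docs only specify the roughly-ten-review threshold and that recent approvals/rejections are fed to the model:

```python
def calibration_examples(reviews, threshold=10, k=5):
    """Return recent approve/reject examples once enough reviews exist.

    Below the threshold this returns nothing — the product's example URLs
    remain the scoring model's only anchor.
    """
    if len(reviews) < threshold:
        return []
    recent = sorted(reviews, key=lambda r: r["reviewed_at"])
    return recent[-k:]
```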

The cohort dashboard on the home page will eventually attribute responses and customer-close events to the campaign’s accounts; the more accurate your reviewed list, the more honest those numbers are.

Open an approved account, click into the Conversation tab. Five drafting modes are available:

  • AI draft — first outbound message. Generates in the account’s language.
  • AI continue — continuation. The AI reads the prior history.
  • Type & translate — write in your language, see the translated version, send the translation.
  • Log sent — you already sent it elsewhere; paste the text to record it.
  • Inbound paste — the account replied. Paste; AI translates into your reading language.

For each account: pick a contact (or add one if you don’t have it yet), draft, edit until it’s right, send through your normal channel (email, IG, LinkedIn, WhatsApp, phone), and click Log sent. LeadHunter records the message and:

  • Bumps the campaign-account outreach_status to sent.
  • Promotes the account’s lifecycle status from prospect to contacted (forward-only — already-customer accounts don’t regress).
  • Counts the account as initiated in the dashboard funnel.
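The three side effects above can be sketched as one function. Field names mirror the docs (`outreach_status`, lifecycle status) but the dict shapes are assumptions:

```python
LIFECYCLE = ["prospect", "contacted", "responded", "customer"]

def log_sent(campaign_account, account):
    """Apply the three effects of clicking Log sent."""
    campaign_account["outreach_status"] = "sent"
    # Forward-only promotion: a customer never regresses to "contacted".
    if LIFECYCLE.index(account["status"]) < LIFECYCLE.index("contacted"):
        account["status"] = "contacted"
    campaign_account["initiated"] = True  # counted in the dashboard funnel
```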

When replies come in, paste them via Inbound paste. The campaign-account flips to responded; the dashboard funnel counts the account as a response.

A campaign is only half-tracked without the cost side. As you run paid ads, pay agencies, or spend internal hours on this campaign, log each expense to the Costs & expenses panel on the campaign detail — or use the 💸 quick-add button on the campaigns list. LeadHunter then surfaces cost per outreached account, cost per response, and (after a few customer closes) cost per customer on the Stats by product and campaign page.
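The three ratios are simple divisions of total logged spend by the funnel counts — sketched here with assumed names, and returning None where the denominator is still zero (e.g. no customer closes yet):

```python
def cost_metrics(total_cost, initiated, responded, customers):
    """Per-unit costs as surfaced on the Stats page (illustrative math)."""
    ratio = lambda n: round(total_cost / n, 2) if n else None
    return {
        "cost_per_outreached_account": ratio(initiated),
        "cost_per_response": ratio(responded),
        "cost_per_customer": ratio(customers),
    }
```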

The full mechanics: Track campaign costs and CAC.

Open the Dashboard (sidebar → home). Three numbers tell you whether the campaign is working:

  • Initiated — outreach started, distinct accounts.
  • Responded — accounts that replied (cohort-attributed, by initiation date).
  • Closed (won) — accounts that converted to customer (cohort-attributed).

Cohort attribution means the rate at the latest day is “low by design” — accounts initiated yesterday haven’t had time to reply yet. Compare windows to each other (last 30d this month vs. last 30d last month) instead of reading the right edge of the chart as gospel.
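Concretely, cohort attribution means a response counts toward the window in which outreach was *initiated*, not when the reply arrived. A sketch with an assumed row shape (each row has an initiation date and a responded flag):

```python
from datetime import date

def cohort_response_rate(rows, start, end):
    """Response rate of accounts initiated within [start, end].

    A reply that arrives after the window still counts toward it, because
    attribution follows the initiation date.
    """
    cohort = [r for r in rows if start <= r["initiated_on"] <= end]
    return sum(r["responded"] for r in cohort) / len(cohort) if cohort else 0.0
```

This is why the right edge of the chart reads low: yesterday's cohort exists, but its replies haven't landed yet.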

The Stats page (sidebar → Stats) splits the same numbers per product and per campaign — handy when you’re running several campaigns of the same product and want to see which one is the most efficient.

When the campaign is done:

  • Archive — soft-hides the campaign from the default list while keeping every conversation, score, audit-trail entry, and account link intact. Use this for “completed but still want the history.”
  • Delete — name-confirmed (you type the campaign name to confirm). Cascades to every campaign-account, conversation, and ICP for this campaign. Use only when you’re sure.

do_not_contact opt-outs and customer lifecycle states on accounts stick around — those live on the account, not the campaign.

Four mistakes to avoid:

  • Skipping the ICP review. A draft ICP that’s never approved will still score, but the reasons will read fuzzy. Spend ten minutes editing the ICP before running scoring on a few hundred accounts.
  • Bulk-adding everything you have. The reach estimator exists so you can stop before importing 4,000 accounts and burning Gemini quota on a single test run. Start with a few dozen of the strongest candidates, calibrate, then go wide.
  • Treating reviews as one-and-done. Approvals and rejections feed back into scoring once you’ve passed the ten-reviewed-account threshold for the product. Your first campaign on a new product is the best one to be thorough on.
  • Forgetting the costs side. A campaign’s success isn’t response rate alone — it’s response rate at what cost. Logging spend as you go (or once a week) means CAC is honest when you go to compare campaigns.