How LeadHunter works
LeadHunter is opinionated about how you use it. This page lays out the intended flow end-to-end so you can see what each phase looks like, what to expect at the end of it, and the rhythm you’ll settle into after the first cycle.
The Quick start walks you through the first fifteen minutes mechanically. This page is the bigger picture around it — read it before, after, or in parallel.
The big picture
```
Phase 1: Set up            ──►  Company · Product · ICP
      │
      ▼
Phase 2: Build database    ──►  Accounts (deduplicated, tagged)
      │
      ▼
Phase 3: Score and triage  ──►  Reviewed campaign list
      │        ▲
      │        │ feeds back into scoring
      ▼        │
Phase 4: Outreach          ──►  Conversations, status auto-promoted
      │
      ▼
Phase 5: Pipeline          ──►  Status transitions, dashboard funnel
      │
      ▼
Phase 6: Costs and CAC     ──►  Spend logged, CAC per outcome
      │
┌─────┘
▼  (loop weekly)
```

The first three phases are mostly one-time setup work. Phases 4–6 are where you’ll live day-to-day, with periodic returns to phases 2 and 3 as you add new accounts and refine your targeting.
Skip cases — when phases compress
Not every starting point is the same. A few common patterns:
- You already have a CSV from your CRM. Phase 2 collapses to a single import. The dedupe stack handles the heavy lifting.
- You’re an agency running a client brief. Phase 1’s ICP is half-written for you — paste the brief into the Product description and Generate ICP from there. Often a single approve-or-revise pass instead of multiple iterations.
- You’re picking back up a Product from a previous campaign. Phase 1 and Phase 3’s calibration are already done — the existing scores are reused for any new campaign of that Product (see ICP and scoring → Score caching). You drop straight into Phase 2 to add new accounts and Phase 4 to start reaching out.
The phases are a complete framework, not a mandatory sequence — skip and revisit as your situation warrants.
Phase 1 — Set up your workspace
Goal: A Company, a Product, and an ICP good enough to start scoring.
Steps:
- Create a Company. One per business; agencies run one per client.
- Add a Product — name, description, website URL, default communication language.
- Add 2–3 example good URLs and 2–3 example bad URLs to the Product. This is the single biggest lever on scoring quality — don’t skip it.
- Generate the ICP from the Product. Review the draft, edit anything that’s off, save.
- (Optional) Define custom fields for whatever your business tracks beyond the standard fields.
- (Optional) Invite teammates.
Expected outcome: An empty Accounts list, but everything downstream is calibrated. Scoring works as soon as accounts arrive.
Common pitfalls:
- Vague Product description. The AI can’t write a sharp ICP from one-line positioning. Two paragraphs is the minimum.
- Skipping example URLs. Scoring quality plateaus without them.
- Trying to cover two audiences with one Product (e.g. enterprise + SMB). Use two Products with two ICPs instead — see Company and Product.
Phase 2 — Build your account database
Goal: A starter set of accounts to work with. Anywhere from 50 to a few thousand, depending on how broad your market is.
Inputs:
- Existing CSV/XLSX from your CRM, Apollo, ZoomInfo, etc.
- Google Maps text searches (“bike shops in Berlin”) for geographic discovery.
- Individual URLs or domains for known leads via the Account lookup.
Use any of them — or all three. See Import accounts for the flows in detail.
Expected outcome: A populated Accounts list, deduplicated against itself.
Common pitfalls:
- Worrying about duplicates. Don’t. Every import path deduplicates silently before writing. Re-importing the same list twice is safe — see Merge duplicates.
- Forgetting to tag existing customers. Mark them `customer` (lifecycle status) and `client` (relationship type) before running campaigns, so the outreach guardrails can protect them. See Account → Relationship types.
- Importing every contact in your CRM. Quality over quantity. Start narrower than you think.
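The silent dedupe-before-write behaviour above can be pictured with a toy normalisation key. This is a sketch only — LeadHunter’s actual dedupe stack matches on more than the domain, and `dedupe_key` is a hypothetical name, not a LeadHunter API:

```python
from urllib.parse import urlparse

def dedupe_key(url: str) -> str:
    """Illustrative dedupe key: normalise a URL or bare domain so that
    re-importing the same list collapses to the same account.
    (Hypothetical sketch; the real matching is richer than this.)"""
    # Accept both "https://www.example.com/x" and bare "example.com".
    host = urlparse(url if "//" in url else "//" + url).netloc.lower()
    return host.removeprefix("www.")
```

With a key like this, two rows that differ only in scheme, casing, path, or a `www.` prefix hash to the same account, which is why re-importing the same CSV twice is safe.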
Phase 3 — Score and triage
Goal: A ranked, reviewed list of strong-fit accounts ready for outreach.
Steps:
- Create a Campaign pointing at your Product.
- Bulk-add accounts — from a saved filter, an import, or by hand.
- Wait for scoring. Each account gets a 0–10 fit score plus a label (`excellent` ≥ 8, `moderate` 5–7, `mismatch` < 5) and 2–4 typed reasons (positive / negative / neutral).
- Sort by score. Open the top accounts and read the AI’s reasons.
- Approve strong fits. Reject ones the AI missed on. Set an override score where it’s clearly off.
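The label thresholds above can be sketched as a tiny function. The thresholds are the documented ones; the function name is hypothetical, not part of LeadHunter:

```python
def fit_label(score: float) -> str:
    """Map a 0-10 fit score to its documented label band
    (excellent >= 8, moderate 5-7, mismatch < 5)."""
    if score >= 8:
        return "excellent"
    if score >= 5:
        return "moderate"
    return "mismatch"
```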
Expected outcome: A campaign with 10–50 reviewed accounts ready to reach out to.
The campaign page also shows a 10-bucket score-distribution histogram (0–1 through 9–10) once scoring runs — a healthy distribution is bimodal (excellent + mismatch piles with a modest moderate middle); a flat or all-moderate shape usually means the ICP or example URLs need work. See Campaign → Score distribution.
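As a rough illustration of how the ten buckets partition the 0–10 range (a sketch for intuition only; `score_histogram` is not a LeadHunter API):

```python
def score_histogram(scores):
    """Bucket 0-10 fit scores into ten bands (0-1, 1-2, ..., 9-10),
    mirroring the campaign page's histogram. Illustrative sketch."""
    buckets = [0] * 10
    for s in scores:
        # A score of exactly 10 lands in the top (9-10) bucket.
        buckets[min(int(s), 9)] += 1
    return buckets
```

A bimodal result — counts piled in the low and high buckets — is the healthy shape the docs describe; a hump in the middle suggests the ICP needs sharpening.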
If scoring fails partway through. The campaign’s `scoring_status` flips to `failed` and stops processing further accounts. Scores from accounts that completed before the failure are kept — re-running scoring picks up where it left off and only scores accounts that don’t yet have one. If you see fewer scored accounts than expected, check `scoring_status` on the campaign before assuming the worst.
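The resume behaviour amounts to “score only what has no score yet.” A minimal sketch, assuming accounts carry a `fit_score` field that stays empty until scoring succeeds (field name and shape are assumptions):

```python
def resume_scoring(accounts, score_fn):
    """Re-run scoring, skipping accounts that already have a score.
    Sketch of the documented resume behaviour; not LeadHunter's code."""
    for account in accounts:
        if account.get("fit_score") is None:  # only the unscored remainder
            account["fit_score"] = score_fn(account)
    return accounts
```

This is why a failed run wastes nothing: completed scores survive, and the re-run touches only the remainder.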
Common pitfalls:
- Trusting the number, not the reasons. The score is a starting point; the reasons are the substance. A 9/10 for the wrong reasons is still wrong.
- Reaching out to unreviewed accounts. At least skim the top of the list first — you’ll spot patterns the AI missed.
Phase 4 — Outreach and conversations
Goal: First messages out, replies pasted in, history kept consistent.
Steps:
- Open an approved campaign-account.
- Pick a contact (or add one).
- AI Draft → review → edit → send through your normal channel (email, LinkedIn, IG, WhatsApp, phone).
- Log Sent in the conversation panel so LeadHunter knows it went out.
- When the account replies, Inbound Paste the reply text. The AI translates it into your language if needed.
- Repeat with AI Continue for follow-ups.
LeadHunter is a communication log, not a sending platform — you send through your own channels and paste the exchange in so scoring, history, and team visibility all stay in sync. See Messaging for the five message modes in detail.
The conversation thread lives in two places: the campaign-account view (just this campaign’s messages with the account) and the account detail page (every conversation across every campaign that account has been in). When an account moves to a new campaign, its history follows it.
Expected outcome: Each reached-out account has a conversation thread. The account’s status auto-promotes from `prospect` to `contacted` on the first outbound; `responded` flips on when a reply lands.
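The auto-promotion rule can be sketched as a handler on each logged message. Field and value names here are assumptions for illustration, not LeadHunter’s API:

```python
def on_logged_message(account, direction):
    """Sketch of the documented status auto-promotion:
    the first logged outbound promotes prospect -> contacted;
    any logged inbound reply flips the account to responded.
    (Hypothetical field/value names.)"""
    if direction == "outbound" and account["outreach_status"] == "prospect":
        account["outreach_status"] = "contacted"
    elif direction == "inbound":
        account["outreach_status"] = "responded"
    return account
```

This is also why forgetting to Log Sent matters: with no outbound event logged, the promotion never fires and the funnel never sees the activity.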
Common pitfalls:
- Forgetting to Log Sent. If LeadHunter doesn’t know you reached out, outreach status doesn’t move and the dashboard funnel doesn’t see the activity.
- Drafting in a language the account doesn’t speak. Let the system pick.
- Cold-pitching a `do_not_contact` account. Hard-blocked with a clear error — there’s no force flag. Resolve the status mismatch instead.
Phase 5 — Pipeline and ongoing operations
Goal: Track relationships as they evolve.
Steps:
- Move status as deals progress: `contacted` → `in_negotiation` → `customer` (or `lost`).
- Add relationship types as they become true — a prospect that converts becomes a `client`; the same account might also be a `partner` or `press` contact later.
- Mark opt-outs as `do_not_contact`. This is sticky and survives merges so opt-outs stay opt-outs.
- Watch the dashboard outreach funnel — contacts initiated, responded, closed — daily, weekly, monthly. Treat it as your activity gauge.
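The status progression described above can be pictured as a small allow-list. This is a hypothetical sketch — LeadHunter’s actual state machine may permit other moves, and the names below are only the ones the docs mention:

```python
# Hypothetical transition map for the documented lifecycle; illustrative only.
ALLOWED = {
    "prospect": {"contacted", "do_not_contact"},
    "contacted": {"in_negotiation", "lost", "do_not_contact"},
    "in_negotiation": {"customer", "lost", "do_not_contact"},
}

def can_move(current: str, new: str) -> bool:
    """Check whether a status transition follows the sketched lifecycle."""
    return new in ALLOWED.get(current, set())
```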
Expected outcome: Your account database stays current with reality. The funnel tells you whether outreach effort and conversion are trending right.
Common pitfalls:
- Letting status drift. If your accounts say `prospect` when half of them are customers, every guardrail and every report drifts with them.
- Skipping the monthly dedupe pass. New imports introduce new duplicate candidates; resolve them before they accumulate.
- Importing an existing customer book without the pre-existing flag. The dashboard’s win count balloons, then you have no clean baseline to measure new campaign wins against.
Phase 6 — Costs and CAC
Goal: Pair every outreach outcome with what it cost so the numbers you read are honest.
Steps:
- Log spend as it accrues — Adwords invoices, agency fees, internal hours, software subscriptions, content production, event costs. Each entry goes against a specific campaign with an amount + currency + date + vendor.
- The campaign’s Costs & expenses panel surfaces total spend (per currency), cost per outreached account, and cost per responded account in real time.
- The Stats by product and campaign page rolls up cost per closed-won customer across the last 30 days, both per campaign and per product.
Auto-tracked API costs (LLM tokens, Google Maps calls, Tavily searches) are surfaced alongside user-entered expenses on every campaign — same panel, same totals — so a campaign’s full cost is one number, not two surfaces you have to add together. The Account detail page also shows a per-account API-cost block, so you can answer “how much did processing this single account actually cost?” directly on the row.
Expected outcome: Every campaign carries a real cost-of-acquisition number you can compare against other campaigns and against the value of a customer.
Common pitfalls:
- Forgetting the costs side. A response rate without a cost is half a story. Logging spend weekly is the minimum cadence to keep the numbers honest.
- Mixing currencies in a single campaign. LeadHunter shows per-currency totals but doesn’t compute CAC across currencies (no FX layer). Either normalise to one currency or run cost analysis externally.
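The per-currency behaviour can be sketched like this — a toy model, not LeadHunter’s implementation. The point it illustrates is that CAC is only ever computed within one currency, never across, because there is no FX layer:

```python
from collections import defaultdict

def cac_per_currency(expenses, closed_won):
    """Total spend per currency, and CAC per currency where computable.
    Sketch only; entry shape ({"amount": ..., "currency": ...}) is assumed."""
    totals = defaultdict(float)
    for e in expenses:
        totals[e["currency"]] += e["amount"]
    # CAC is undefined with zero wins; no cross-currency total is produced.
    return {cur: (total / closed_won if closed_won else None)
            for cur, total in totals.items()}
```

A campaign with EUR and USD entries gets two CAC numbers, not one — which is the reason to normalise to a single currency if you want a single figure.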
See Track campaign costs and CAC for the full mechanics.
The recurring rhythm
After the first cycle, you’ll settle into something like this:
- Daily — send today’s outreach; paste yesterday’s replies; update status on anything that moved.
- Weekly — review fresh scores, approve/reject five to fifteen accounts to keep feeding the model; refresh saved filters as your segmentation evolves; add this week’s new imports; log this week’s ad spend / agency invoices / hours against the right campaign.
- Monthly — run Find Duplicates; revisit the ICP and example URLs based on what closed-won; archive completed campaigns; check the dashboard’s 30-day trend lines; read the per-campaign CAC on the Stats page and pick the most efficient campaigns to do more of.
The product is built to reward this rhythm. The dashboard funnel reads better with consistent activity; the scoring model calibrates better with continuous review; the database stays clean if duplicates are caught monthly rather than yearly.
Where to go next
- First-time setup — Quick start.
- Data model — Account, Campaign, Company and Product, ICP and scoring.
- Operations — Run your first campaign, Log a conversation, Track campaign costs and CAC.
- Hitting a snag? — FAQ covers the most common first-week questions.
- Recent changes — What’s new.