FAQ

Grouped by topic. If your question isn’t here, check How LeadHunter works for the bigger picture, or reach out to support.

What is LeadHunter?

It’s an outbound research + outreach platform for small teams running their own go-to-market. You bring a product and a rough idea of who you want to sell to; LeadHunter turns that into a scored database of fit accounts, runs the AI scoring against your ICP, helps you draft and translate outbound messages, and keeps the per-account conversation log so the team stays in sync. Read Welcome for the long version.

Does LeadHunter send my messages for me?

No. LeadHunter is a communication log, not a sending platform. You keep talking to accounts through your normal channels — email, LinkedIn, Instagram, WhatsApp, phone — and paste the exchanges back into LeadHunter so the AI, the scoring, the dashboard funnel, and the team have full context. The product is intentionally out of the sending path. See Messaging.

How does this differ from Apollo / ZoomInfo / Clay?

Apollo and ZoomInfo are primarily contact databases — you query an existing index of millions of companies and people. LeadHunter is the layer on top of that work: you bring (or discover) your own accounts, and LeadHunter scores them against your specific product’s ICP, drives the daily outreach loop, and measures CAC. Clay is closer in shape but skews heavily towards programmatic data pipelines; LeadHunter skews towards small teams running outbound by hand with strong AI assistance. The three are complementary more than competitive.

Why is the data called Account but the product is called LeadHunter?

The product is named for the verb: hunting leads, finding leads, lead generation is the discovery activity. The noun that gets stored is an Account (one row per organisation, neutral about whether it’s a prospect, customer, partner, press, supplier, etc.). Calling the row “Lead” would have baked a sales-only assumption into a model that needs to span every kind of relationship. The product kept its name because the verb phrase is still accurate.

Is LeadHunter hosted or self-hosted?

Both flavours exist. The hosted product at leadhunter.humans2agents.com is managed for you — provider keys (Gemini, Google Maps, Tavily) are configured by the platform, costs flow through your plan, infrastructure is handled. Self-hosted deployments run the same Django + Next.js + Postgres stack on your own infrastructure, with your own provider API keys (set via environment variables). Self-hosted users own their own data and quota; hosted users get zero-config plus support.

What happens when I try to create an account that already exists?

LeadHunter merges instead of duplicating. The five-level dedupe stack runs on every account creation; if any level matches, the new data is merged into the existing row (and surfaced for review only for the lowest-confidence fuzzy matches). Nothing is silently overwritten — every unique field from every duplicate is preserved on the survivor, and merge_history records the audit trail. See Merge duplicates.
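The merge semantics can be sketched roughly like this (an illustrative sketch only; the field name merge_history follows this doc, but the function and record shapes are made up, not LeadHunter’s actual code):

```python
from datetime import datetime, timezone

def merge_accounts(survivor: dict, duplicate: dict) -> dict:
    """Illustrative merge: never overwrite a populated survivor field,
    fill gaps from the duplicate, and append an audit snapshot."""
    merged = dict(survivor)
    for field, value in duplicate.items():
        if field == "merge_history":
            continue
        if not merged.get(field):   # only fill fields the survivor lacks
            merged[field] = value
    merged.setdefault("merge_history", []).append({
        "absorbed_snapshot": duplicate,  # full copy of the losing row
        "merged_at": datetime.now(timezone.utc).isoformat(),
    })
    return merged
```

Merging `{"name": "Acme", "website": ""}` with `{"name": "Acme Inc", "website": "https://acme.com"}` keeps the survivor’s name, fills in the missing website, and records the absorbed row in the audit trail.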

Can I undo a merge?

Not automatically. The absorbed accounts are deleted at the end of a merge; merge_history records snapshots of them, so you can read what was on each side, but restoring them is a manual rebuild. Err on the side of spot-checking AI-suggested merges before bulk-approving large lists.

Can I move accounts between companies?

Not directly. Companies are isolated tenant boundaries by design. If you really need to move data, export from the source company (CSV) and re-import into the target — the dedupe machinery handles the rest. Accounts, custom fields, saved filters, campaigns, and expenses all need to come over separately.

Can I export my accounts?

Yes. From the Accounts list, you can export the currently visible, filtered list as CSV. Filtering first lets you scope the export to whatever subset you need; the full database is one click away when no filter is applied.

What happens to my data if I cancel?

Cancelling closes the account but doesn’t immediately purge the data. Contact support to request a final export and a confirmed deletion timeline; both are on the operational roadmap as self-serve actions but aren’t first-class UI buttons yet.

What happens to a deleted custom field’s values?

The schema is removed (the field stops showing on account forms and the import wizard refuses new mappings for it), but existing values stay on the underlying account rows. Re-creating a field with the same key brings the values back into view. The trade-off is deliberate: cheap to re-add a field, expensive to recover deleted history. If you genuinely want the values gone, ask support for a one-off cleanup. See Custom fields → Adding, renaming, deleting.

How do I rescore everything against an updated ICP?

Open the Product, edit the ICP, click Save. Then use Score campaigns to re-run scoring on every account in every campaign of that product. The score lives per-(Product, Account), so one rescore propagates to every campaign sharing that product. See ICP and scoring → When to rescore.

How long does scoring take, and what does it cost?

Scoring runs on Gemini Flash by default — fast and cheap. A batch of a few hundred accounts typically finishes in 5–15 minutes. The campaign page polls the scoring status and updates the score column as rows come back, so you can keep working while it runs. Per-account cost is a fraction of a cent.

What does Deep research do that Quick mode doesn’t?

Deep research goes beyond the standard Quick mode (which scrapes the home page and extracts the business name + sector + language). It additionally crawls the about / team / staff / contact pages of the website and runs an LLM extraction to pull out 1–3 decision-maker contacts (name, title, sometimes email). Use it for high-value accounts where you’d otherwise hand-research the buying group; the cost per run is meaningfully higher than Quick, so it’s not the default.

Why is my first batch of scores vague?

Almost always one of three causes:

  1. The Product description is too thin. One line of positioning isn’t enough to anchor a sharp ICP. Two paragraphs is the minimum.
  2. No example URLs. The 2–3 good and bad example URLs on the Product are the single biggest scoring lever after the ICP itself. Skipping them produces moderate-confidence scores with weak reasons.
  3. No website on the account. Phone-only or name-only accounts can be scored, but the model is leaning on structured fields alone — scores cluster in the moderate range, reasons stay vague. Let auto-enrichment discover the website, or trigger Deep research from the account detail page.

Will scoring improve as I use LeadHunter more?

Yes — every approve or reject you mark in a campaign’s review queue is a calibration signal. Once a Product crosses ~10 reviewed accounts, your verdicts start feeding back into the scoring prompt as concrete “what good looks like here” examples, and subsequent batches get sharper. The signal is product-scoped — approvals on Product A don’t leak into Product B’s scoring. See ICP and scoring → The feedback loop.

What languages can the AI draft in?

Any of the major languages the underlying LLM supports — English, Spanish, Catalan, French, German, Italian, Portuguese, Dutch, Polish, and many more. Set the account’s language field to the language they read (or let the campaign default win); the AI drafts outbound and translates inbound into your reading language. See Messaging → Language resolution.
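The precedence just described amounts to a one-line fallback chain. A minimal sketch (the function name and the final "en" fallback are assumptions for illustration, not LeadHunter’s actual resolution code):

```python
def resolve_language(account_language, campaign_default, fallback="en"):
    """Account-level language wins; otherwise the campaign default applies.
    The final fallback value is an assumption made for this sketch."""
    return account_language or campaign_default or fallback
```

So an account marked Catalan is drafted in Catalan even in a Spanish-default campaign, while an account with no language set inherits the campaign default.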

Why are the labels excellent / moderate / mismatch instead of great / good / bad?

The labels reflect fit confidence, not desirability. mismatch means “the AI thinks this account doesn’t match this product’s ICP” — which is useful information even if the account is a great business. The category labels stay neutral on quality so the score is about predicting outcome, not making a value judgement.

How do I track an account that came in through Google Ads?

Set the Acquisition channel to adwords when you create the account (or after the fact, from the account detail page). Channel-specific data — UTM parameters, the gclid, the ad campaign id — goes in the Acquisition metadata block. Inbound channels auto-promote the account from prospect to contacted because the lead has already reached out to you. See Track inbound leads.

Can a single account be in multiple campaigns?

Yes. Same account in three campaigns of the same product gets scored once (the score lives on the Product–Account pair); per-campaign workflow (review status, outreach status, override score, chosen contact) lives on the CampaignAccount join row. Same account in campaigns of different products gets scored once per (Product, Account) pair.
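The sharing rule can be sketched as a cache keyed on that pair (illustrative names and shapes, not LeadHunter’s real schema):

```python
scores: dict = {}  # (product_id, account_id) -> score payload

def score_once(product_id: int, account_id: int, run_model):
    """Score an account for a product exactly once: three campaigns of the
    same product reuse the cached entry; a different product scores again.
    Sketch only -- per-campaign workflow would live on a separate join row."""
    key = (product_id, account_id)
    if key not in scores:
        scores[key] = run_model(product_id, account_id)  # one LLM call per pair
    return scores[key]
```

Calling this three times for the same (product, account) pair triggers a single model call; switching products triggers a fresh one.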

Can one campaign target multiple products?

No — each campaign points at exactly one Product. If you need to push two products to the same audience, run two campaigns. The constraint exists because the score is per-(Product, Account), and a campaign that mixed products wouldn’t have a coherent score column.

What does the dashboard funnel actually count?

Distinct accounts, not messages. A campaign that sends three follow-ups to the same account counts as one initiated, not three. Responded counts the account once on its first inbound; closed reads the account’s status history for any transition to customer. All stages are cohort-attributed back to the initiation date — see How LeadHunter works → Phase 5.
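A minimal sketch of that counting rule (illustrative, not the dashboard’s actual query; the message shape and the requirement that a response follow an initiation are assumptions here):

```python
def funnel_counts(messages):
    """Count distinct accounts, not messages: three follow-ups to one
    account are still one 'initiated'. In this simplified sketch, an
    account only counts as responded if it was also initiated."""
    initiated = {m["account_id"] for m in messages if m["direction"] == "outbound"}
    responded = {m["account_id"] for m in messages if m["direction"] == "inbound"}
    return {"initiated": len(initiated), "responded": len(responded & initiated)}
```

Three outbound messages to account 1 plus one to account 2, with a single inbound reply from account 1, yields initiated = 2 and responded = 1.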

What’s the difference between API costs and Campaign expenses?

Two completely different cost surfaces:

  • API costs are auto-tracked — every LLM call, Google Maps lookup, Tavily search the platform fires on your behalf. You don’t enter these; LeadHunter records them.
  • Campaign expenses are user-entered — Adwords spend, agency fees, labor hours, software, content, events. You log them against the campaign they’re paying for.

Together they give the full cost of running a campaign. CAC on the Stats by product and campaign page is computed from the user-entered side. See Usage and costs and Track campaign costs and CAC.

Will LeadHunter stop running operations if I hit a usage limit?

No — there’s no built-in quota enforcement layer today. Heavy usage just produces a bigger bill from the underlying providers. Plan-level enforcement is on the roadmap; until it ships, watch your provider dashboards (Google Cloud, OpenAI, etc.) for the actual quota surface.

Can I run LeadHunter against my own API keys?

For self-hosted deployments, yes — the platform reads provider keys (Gemini, Google Maps, Tavily, etc.) from environment variables. For the hosted product, keys are managed by LeadHunter and costs flow through your plan. Contact support if you’re considering BYO keys for a hosted setup.
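For a rough sense of what that looks like, a self-hosted deployment’s environment might carry keys along these lines (the variable names below are illustrative guesses, not documented settings; check your deployment’s own configuration reference for the real ones):

```shell
# Hypothetical variable names -- consult your deployment's settings docs.
export GEMINI_API_KEY="your-gemini-key"
export GOOGLE_MAPS_API_KEY="your-maps-key"
export TAVILY_API_KEY="your-tavily-key"
```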

Why is the Cost column on my campaigns list empty?

Two possibilities. Either no expenses have been logged for that campaign yet (the Cost column reflects user-entered campaign expenses — Adwords, agency, hours, etc.), or expenses have been logged but in mixed currencies. The headline column shows the primary currency total; a small + next to it indicates other currencies are present. To log expenses, open the campaign and use the Costs & expenses panel, or click the 💸 icon directly on the row in the campaigns list. See Track campaign costs and CAC.

Why is my campaign’s CAC showing as null?

Two common cases. Either the campaign has expenses in multiple currencies (LeadHunter doesn’t carry an FX layer, so CAC isn’t computed across currencies — you’ll see a per-currency total and a cac_unavailable_reason explanation), or no outreach has happened yet (CAC = total ÷ count; the denominator is zero). Normalise the campaign to one currency and log at least one outbound message; the CAC math will appear.
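Both cases fall out of the same rule. A sketch of the behaviour described above (illustrative only; the function and return shapes are made up for this example, not LeadHunter’s actual code):

```python
from collections import defaultdict

def compute_cac(expenses, count):
    """CAC = total / count, per the rule above. No FX conversion happens:
    mixed currencies mean no single CAC, only per-currency totals. Sketch."""
    totals = defaultdict(float)
    for e in expenses:
        totals[e["currency"]] += e["amount"]
    if not totals:
        return {"cac": None, "reason": "no expenses logged"}
    if count == 0:
        return {"cac": None, "reason": "denominator is zero"}
    if len(totals) > 1:
        return {"cac": None, "reason": "mixed currencies",
                "per_currency": dict(totals)}
    (currency, total), = totals.items()
    return {"cac": total / count, "currency": currency}
```

A single-currency campaign with a nonzero denominator gets a number; everything else gets None plus a reason, mirroring the cac_unavailable_reason behaviour described above.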

What’s the difference between archive and delete?

Archive soft-hides an item (Product, Campaign, or Account) from default lists but keeps every conversation, score, expense, and audit-trail entry intact. Use it for “completed but I want the history” — or, on an Account, “this business closed / merged / pivoted out, but I want the relationship history.” Accounts can also carry a free-text archive reason that shows up on the account header and audit trail.

Delete is permanent. Products are protected if any campaign references them (archive instead); Campaigns require typing the exact name to confirm; Accounts require typing the exact account name to confirm, and deleting one cascades to every related campaign row, score, and conversation. Deleting a campaign never affects the underlying Accounts.

What team roles exist, and what can each do?

Three roles — owner, admin, member — all with the same data access but different team-management capabilities. Owners can delete the Company and remove other owners (with a last-owner guard); admins can add/remove teammates and change roles for members and admins; members read and write all the Company’s data but can’t touch team config. Adding teammates is currently support-driven (no self-serve invite UI yet). See Team and companies.

Where can I see who did what?

Several audit-trail surfaces:

  • Account status history — every status change (manual or auto-promoted) records who triggered it, when, with what reason and source. Visible on the account detail page.
  • Conversation log — every message records created_by and timestamp. Threads are visible per-campaign and on the account detail page.
  • Merge history — the survivor of any merge carries merge_history JSON with snapshots of the absorbed rows, who ran the merge, when, and the field-winner decisions.
  • Research records — created_by + executed_by per record; the dashboard’s Research page is filterable by user.

No single unified “team activity” view today — different audit trails live on the entities they describe.

Can two teammates edit the same account at the same time?

Yes — there’s no row-level locking, and the last save wins. In practice this is rarely a problem because the dashboard typically shows different teammates working on different campaigns or different review queues. If you do hit a conflict (e.g. one person changes status while another edits notes), the second save overwrites — refresh and merge manually.

Where do I report bugs or request features?

GitHub Issues or your support contact. Bugs with clear reproduction steps and feature requests with a concrete use-case get the fastest response.