# Usage and costs
A LeadHunter campaign costs money in two completely different ways, and the docs split them along that line:
- Auto-tracked API costs — every LLM call, Google Maps lookup, Tavily search, etc. that the platform fires on your behalf. LeadHunter records these for you; you don’t enter them.
- User-entered campaign expenses — Adwords spend, agency fees, internal labor hours, software subscriptions, content production, event costs. You log these against the campaign that’s paying for them.
Both surfaces attribute back to the campaign that triggered them, so the full picture is auto-tracked API costs + user-entered expenses = real campaign cost. They appear side by side on the campaign detail page. Cost-of-acquisition (CAC) is computed from the user-entered side only — auto-tracked API spend is reported alongside but kept separate (it’s in USD, your expenses may not be, and LeadHunter doesn’t FX-convert).
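The split can be sketched in a few lines of Python. Field names here (`auto_tracked_usd`, the `(amount, currency)` expense tuples) are illustrative, not LeadHunter's actual payload:

```python
def campaign_cost_picture(auto_tracked_usd, expenses):
    """Combine both surfaces without folding auto-tracked USD into CAC math.

    expenses: list of (amount, currency) tuples entered by the user.
    Hypothetical shapes -- the real API payloads may differ.
    """
    currencies = {cur for _, cur in expenses}
    user_entered = sum(amt for amt, _ in expenses)
    return {
        "auto_tracked_usd": auto_tracked_usd,     # reported alongside, in USD
        "user_entered": user_entered,             # the only input to CAC
        "single_currency": len(currencies) <= 1,  # precondition for CAC
    }

picture = campaign_cost_picture(40.0, [(300.0, "EUR"), (120.0, "EUR")])
# the 420.0 EUR user-entered total feeds CAC; the $40 LLM spend stays separate
```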
## Auto-tracked API costs

Every external call the platform makes is logged as a usage row with an attributed cost. Ten providers are tracked out of the box:
| Provider | Category | What it’s used for |
|---|---|---|
| Google Maps Places API | Discovery | Text searches and place-detail lookups |
| Tavily AI Search | Research | Web search during AI deep research |
| Apollo.io | Enrichment | Contact discovery (when integrated) |
| OpenAI GPT-4o | LLM | ICP generation, scoring, drafting |
| OpenAI GPT-4o Mini | LLM | Cheaper drafting and translation |
| Google Gemini Flash | LLM | Default scoring model — fast + cheap |
| Google Gemini Pro | LLM | Higher-quality reasoning on demand |
| Anthropic Claude Opus | LLM | Highest-tier reasoning |
| Anthropic Claude Sonnet | LLM | Mid-tier reasoning |
| Anthropic Claude Haiku | LLM | Fastest Anthropic tier |
The provider list lives in seeded data and grows over time. New providers added to the platform get their own rows; the API exposes them at `GET /api/api-providers/`.
### What gets recorded per call

Each usage row carries:
- Provider — which service the call went to.
- Model name — for LLM calls, the exact model (e.g. `gemini-3.1-flash-lite`, `gpt-4o-mini`) so a campaign’s spend can be split across the models that ran it.
- Operation type — what kind of call it was (e.g. `completion`, `places_search`, `details`).
- Project, research, campaign, account — four-way attribution. Per-account is the lever for “how much did this account cost to enrich?”; per-campaign is the lever for “what’s this campaign actually burning?”.
- API calls + input/output tokens — the volume that produced the cost.
- Cost in USD, computed from the provider’s pricing model (per-call or per-token).
- Langfuse trace id — when LLM observability is on, the trace id links the cost row back to the prompt/response in Langfuse.
Costs are also rolled up nightly for fast trend queries — that’s what powers the daily-trend chart on the cost endpoints.
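As a rough sketch of how per-token cost attribution and the nightly rollup fit together (the pricing figures and row layout below are invented for illustration, not LeadHunter's actual rates or schema):

```python
from collections import defaultdict
from datetime import date

def row_cost_usd(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Per-token pricing: cost = tokens / 1M * price-per-million-tokens."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

def daily_rollup(rows):
    """Nightly-style rollup: sum cost per (day, provider) for trend queries."""
    totals = defaultdict(float)
    for day, provider, cost in rows:
        totals[(day, provider)] += cost
    return dict(totals)

# 12k input / 800 output tokens at made-up per-million prices
cost = row_cost_usd(12_000, 800, in_price_per_m=0.10, out_price_per_m=0.40)
```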
### Where to read the API-cost numbers

- Per campaign — the Costs & expenses panel on every campaign detail page shows the auto-tracked LLM total in USD with a per-model breakdown, side by side with your user-entered expenses. The top models by spend appear with their call count.
- Per account — the auto-tracked LLM cost for one account (its website discovery, AI enrichment, single-lead rescores) is exposed on the account detail payload and visible on the account drawer.
- Per provider, per research, daily trend — available via API today; a polished cross-project Usage page is on the roadmap. If you need a specific cut sooner (auditing a particular import’s enrichment cost, comparing provider spend month-over-month), reach out to support and we’ll pull it.
Track the Usage page rollout in What’s new.
## User-entered campaign expenses

Distinct from auto-tracked API costs. These are the things LeadHunter has no way of knowing about — ad spend on platforms you pay directly, agency invoices, the hours your team puts in. You log them as line items against a specific campaign.
Ten kinds: `adwords`, `meta_ads`, `linkedin_ads`, `other_ads`, `agency`, `labor`, `software`, `content`, `event`, `other`. Multi-currency aware (no FX conversion); CAC is only computed when a campaign’s expenses share one currency.
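Assuming the usual spend-divided-by-customers-won definition of CAC, the single-currency gate can be sketched like this (field names hypothetical):

```python
def cac(expenses, customers_acquired):
    """CAC = total user-entered spend / customers won, but only when every
    expense line shares one currency -- otherwise None (no FX conversion)."""
    currencies = {e["currency"] for e in expenses}
    if len(currencies) != 1 or customers_acquired == 0:
        return None
    total = sum(e["amount"] for e in expenses)
    return total / customers_acquired

mixed = [{"amount": 500, "currency": "EUR"}, {"amount": 200, "currency": "USD"}]
clean = [{"amount": 500, "currency": "EUR"}, {"amount": 100, "currency": "EUR"}]
# cac(mixed, 10) -> None, because EUR and USD can't be summed without FX
```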
The mechanics — the inline panel on the campaign detail page, the 💸 quick-add on the campaigns list, the Cost column on the stats page — live in their own guide: see Track campaign costs and CAC.
## Four-way attribution

Both surfaces share an attribution model so the underlying data can always answer “what did this cost?” — per campaign, per account, per research record, per provider. The campaign detail page covers the campaign-level slice; the account detail covers the per-account enrichment slice; deeper breakdowns (per-provider, per-research, cross-campaign) ship with the Usage page.
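The point of the four-way model is that one set of usage rows can answer the cost question along any axis. A toy grouping, with invented row fields:

```python
from collections import defaultdict

def spend_by(rows, key):
    """Answer 'what did this cost?' along one attribution axis:
    'campaign', 'account', 'research', or 'provider'."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row["cost_usd"]
    return dict(totals)

# Hypothetical usage rows carrying all four attribution keys at once.
rows = [
    {"campaign": "spring", "account": "acme", "research": "r1", "provider": "gemini", "cost_usd": 0.02},
    {"campaign": "spring", "account": "bolt", "research": "r1", "provider": "gemini", "cost_usd": 0.03},
    {"campaign": "autumn", "account": "acme", "research": "r2", "provider": "maps",   "cost_usd": 0.05},
]
```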
## Plan limits and billing

Today, LeadHunter doesn’t block operations when usage hits a threshold — there’s no built-in quota enforcement layer. Heavy usage just produces a bigger bill from the underlying providers. If you’re running LeadHunter on your own provider credentials (Gemini key, Maps key, Tavily key, …), you’ll see the actual usage on those provider dashboards too.
Plan / quota / billing-period enforcement at the LeadHunter level is on the roadmap. Until it lands, the per-provider dashboards on Google Cloud / OpenAI / Anthropic / Tavily remain the source of truth for “am I about to overshoot my own budget?”.
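Until then, a simple client-side runway estimate over your own month-to-date spend might look like this (a sketch of the check you would run yourself, not a LeadHunter feature):

```python
def days_of_runway(daily_costs_usd, monthly_budget_usd):
    """Estimate days until the budget is exhausted at the current average
    daily burn. Returns None when there's no spend to extrapolate from."""
    spent = sum(daily_costs_usd)
    if not daily_costs_usd or spent == 0:
        return None
    avg_per_day = spent / len(daily_costs_usd)
    remaining = max(monthly_budget_usd - spent, 0.0)
    return remaining / avg_per_day

# two days at $2/day against a $10 budget: $6 left, ~3 days of runway
runway = days_of_runway([2.0, 2.0], 10.0)
```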
## Keeping costs down

Five concrete patterns:
- Score reuse across campaigns. The fit score lives on the (Product, Account) pair, not on the CampaignAccount. Adding the same account to a second campaign of the same product reuses the existing score — zero Gemini cost, instant. Don’t create per-campaign products unless you genuinely need different ICPs.
- Auto-enrichment short-circuits. New accounts get website-discovery + scraping for free only when they arrive without those fields. Imports that ship a populated `website` + `website_content` skip the enrichment job — the auto-enrichment signal checks for this. Pre-populating the import file saves quota.
- Dry-run before bulk imports. Set `dry_run=true` on `/import_mapped/`. It runs every dedupe check and shows you the projected merge / create / fuzzy-candidate counts without spending a Gemini call on extraction. Stops the “I just discovered the wrong column mapping after spending €40 on enrichment” problem.
- Use saved filters over fresh Google Maps sweeps. Once your database has the bike shops in Berlin, building a saved filter that finds them is free. Re-running the original Google Maps query a month later only catches new shops — but it also re-pays the place-details API cost for the ones it merges into existing rows. If you don’t expect a lot of new places, prefer the filter.
- Specific Product descriptions. A sharp Product description + good example URLs means the first batch of scoring is well-calibrated. A vague description means a lot of moderate-confidence scores, more manual overrides, more rescoring as you tighten the ICP — every rescore is fresh Gemini cost.
## Common pitfalls

- Confusing the two cost surfaces. CAC is computed from the user-entered side only — the auto-tracked side is reported in USD next to it, not folded in. A campaign with €0 user-entered expenses but $40 of auto-tracked Gemini scoring will read “no CAC” but still show the $40 LLM total. For full-cost analysis, eyeball both cards together.
- Forgetting to log user-entered spend. Auto-tracked costs accumulate by themselves. Adwords invoices and agency fees only show up in the CAC math if you log them. Make logging spend a weekly habit.
- Re-running the same Google Maps search. Each re-run hits the place-details API for every result, even ones that merge into existing rows. Slightly cheaper than the first run (since the dedupe stack short-circuits the creation path) but not free.
- Expecting plan limits to stop you. They won’t, today. Watch the per-provider dashboards on Google Cloud / OpenAI / Anthropic.
## Read next

- Track campaign costs and CAC — the user-entered side, in detail, with worked examples.
- Campaign → Costs and CAC — how the two surfaces combine into per-campaign cost.
- Research — the audit-trail records that auto-tracked API cost attaches to.
- ICP and scoring — score caching and when to rescore (the biggest LLM-cost lever).