Usage and costs

A LeadHunter campaign costs money in two completely different ways, and the docs split them along that line:

  • Auto-tracked API costs — every LLM call, Google Maps lookup, Tavily search, etc. that the platform fires on your behalf. LeadHunter records these for you; you don’t enter them.
  • User-entered campaign expenses — Adwords spend, agency fees, internal labor hours, software subscriptions, content production, event costs. You log these against the campaign that’s paying for them.

Both cost surfaces attribute back to the campaign that incurred them, so the full picture is auto-tracked API costs + user-entered expenses = real campaign cost, and the two appear side by side on every campaign detail page. Cost of acquisition (CAC) is computed from the user-entered side only; auto-tracked API spend is reported alongside but kept separate (it’s in USD, your expenses may not be, and we don’t FX-convert).
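To make the split concrete, here’s a minimal worked example. The CAC formula used (expenses divided by acquisitions) is the conventional definition and an assumption here; the docs only guarantee that CAC comes from the user-entered side:

```python
# Illustrative only: how the two cost surfaces relate.
# Assumption: CAC = user-entered expenses / acquisitions (the conventional
# definition); LeadHunter only guarantees CAC uses the user-entered side
# and requires a single currency.

auto_tracked_usd = 40.00      # LLM + Maps + Tavily calls, always in USD
user_expenses_eur = 500.00    # e.g. Adwords spend you logged yourself
acquisitions = 10

# CAC comes from the user-entered side only, in its own currency.
cac_eur = user_expenses_eur / acquisitions   # 50.00 EUR

# The auto-tracked total is reported alongside, never FX-converted, so the
# "real campaign cost" is a two-number answer rather than one figure:
print(f"CAC: {cac_eur:.2f} EUR, plus {auto_tracked_usd:.2f} USD auto-tracked")
```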

Every external call the platform makes is logged as a usage row with an attributed cost. Ten providers are tracked out of the box:

Provider                   Category     What it’s used for
Google Maps Places API     Discovery    Text searches and place-detail lookups
Tavily AI Search           Research     Web search during AI deep research
Apollo.io                  Enrichment   Contact discovery (when integrated)
OpenAI GPT-4o              LLM          ICP generation, scoring, drafting
OpenAI GPT-4o Mini         LLM          Cheaper drafting and translation
Google Gemini Flash        LLM          Default scoring model — fast + cheap
Google Gemini Pro          LLM          Higher-quality reasoning on demand
Anthropic Claude Opus      LLM          Highest-tier reasoning
Anthropic Claude Sonnet    LLM          Mid-tier reasoning
Anthropic Claude Haiku     LLM          Fastest Anthropic tier

The provider list lives in seeded data and grows over time. New providers added to the platform get their own rows; the API exposes them at GET /api/api-providers/.
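A minimal sketch of pulling that list, assuming token auth and a JSON array response (the endpoint path is documented above; the base URL, auth scheme, and field names are assumptions):

```python
# Sketch: list the tracked API providers.
# /api/api-providers/ is the documented path; BASE_URL, the auth header,
# and the response field names are assumptions for illustration.
import requests

BASE_URL = "https://leadhunter.example.com"  # hypothetical instance URL

resp = requests.get(
    f"{BASE_URL}/api/api-providers/",
    headers={"Authorization": "Token YOUR_API_TOKEN"},  # assumed auth scheme
    timeout=30,
)
resp.raise_for_status()
for provider in resp.json():
    print(provider.get("name"), provider.get("category"))
```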

Each usage row carries:

  • Provider — which service the call went to.
  • Model name — for LLM calls, the exact model (e.g. gemini-3.1-flash-lite, gpt-4o-mini) so a campaign’s spend can be split across the models that ran it.
  • Operation type — what kind of call it was (e.g. completion, places_search, details).
  • Project, research, campaign, account — four-way attribution. Per-account is the lever for “how much did this account cost to enrich?”; per-campaign is the lever for “what’s this campaign actually burning?”.
  • API calls + input/output tokens — the volume that produced the cost.
  • Cost in USD, computed from the provider’s pricing model (per-call or per-token).
  • Langfuse trace id — when LLM observability is on, the trace id links the cost row back to the prompt/response in Langfuse.
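Put together, a single usage row might look roughly like this (a sketch; the field names are illustrative, not the exact schema):

```python
# Illustrative shape of one usage row. Field names are assumptions derived
# from the attribute list above, not the exact API schema.
usage_row = {
    "provider": "Google Gemini Flash",
    "model_name": "gemini-3.1-flash-lite",  # exact model, enables per-model splits
    "operation_type": "completion",         # or places_search, details, ...
    "project_id": 12,                       # four-way attribution starts here
    "research_id": 345,
    "campaign_id": 67,
    "account_id": 8901,
    "api_calls": 1,
    "input_tokens": 2048,
    "output_tokens": 512,
    "cost_usd": "0.0021",                   # from the provider's pricing model
    "langfuse_trace_id": "a1b2c3d4",        # present only when observability is on
}
```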

Costs are also rolled up nightly for fast trend queries — that’s what powers the daily-trend chart on the cost endpoints.

  • Per campaign — the Costs & expenses panel on every campaign detail page shows the auto-tracked LLM total in USD with a per-model breakdown, side-by-side with your user-entered expenses. The top models by spend appear with their call count.
  • Per account — the auto-tracked LLM cost for one account (its website discovery, AI enrichment, single-lead rescores) is exposed on the account detail payload and visible on the account drawer.
  • Per provider, per research, daily trend — available via API today (see the sketch below); a polished cross-project Usage page is on the roadmap. If you need a specific cut sooner (auditing a particular import’s enrichment cost, comparing provider spend month-over-month), reach out to support and we’ll pull it.
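A sketch of one such cut, pulling a campaign’s daily cost trend. Only the existence of cost endpoints with a daily-trend rollup is documented; the path, parameters, and response fields below are assumptions:

```python
# Sketch: fetch the nightly-rolled-up daily cost trend for one campaign.
# /api/costs/daily-trend/ and its parameters are hypothetical names; the
# docs confirm daily-trend cost endpoints exist, not this exact shape.
import requests

BASE_URL = "https://leadhunter.example.com"  # hypothetical instance URL

resp = requests.get(
    f"{BASE_URL}/api/costs/daily-trend/",
    headers={"Authorization": "Token YOUR_API_TOKEN"},
    params={"campaign": 67, "days": 30},
    timeout=30,
)
resp.raise_for_status()
for day in resp.json():
    print(day.get("date"), day.get("cost_usd"))
```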

Track the Usage page rollout in What’s new.

User-entered campaign expenses are distinct from auto-tracked API costs: they’re the things LeadHunter has no way of knowing about — ad spend on platforms you pay directly, agency invoices, the hours your team puts in. You log them as line items against a specific campaign.

There are ten expense kinds: adwords, meta_ads, linkedin_ads, other_ads, agency, labor, software, content, event, other. Expenses are multi-currency aware but never FX-converted, so CAC is only computed when all of a campaign’s expenses share one currency.
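Logging a line item might look like this. The expense kinds and the single-currency rule are documented above; the endpoint path and field names are assumptions:

```python
# Sketch: log a user-entered expense against a campaign.
# /api/campaign-expenses/ and the field names are hypothetical; the `kind`
# values and the single-currency CAC rule come from the docs.
import requests

BASE_URL = "https://leadhunter.example.com"  # hypothetical instance URL

resp = requests.post(
    f"{BASE_URL}/api/campaign-expenses/",
    headers={"Authorization": "Token YOUR_API_TOKEN"},
    json={
        "campaign": 67,
        "kind": "adwords",           # one of the ten documented kinds
        "amount": "500.00",
        "currency": "EUR",           # mixing currencies disables CAC
        "description": "March search ads",
    },
    timeout=30,
)
resp.raise_for_status()
```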

The mechanics — the inline panel on the campaign detail page, the 💸 quick-add on the campaigns list, the Cost column on the stats page — live in their own guide: see Track campaign costs and CAC.

Both surfaces share an attribution model so the underlying data can always answer “what did this cost?” — per campaign, per account, per research record, per provider. The campaign detail page covers the campaign-level slice; the account detail covers the per-account enrichment slice; deeper breakdowns (per-provider, per-research, cross-campaign) ship with the Usage page.
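Because every usage row carries those attribution keys, any slice reduces to a group-by once you have the raw rows. A minimal sketch, reusing the illustrative field names from the usage-row example above:

```python
# Sketch: answer "what did this cost?" for any slice by grouping usage rows
# on an attribution key. `rows` would come from a usage export or endpoint;
# field names follow the illustrative usage-row shape shown earlier.
from collections import defaultdict
from decimal import Decimal

def cost_by(rows: list[dict], key: str) -> dict:
    """Sum cost_usd per distinct value of one attribution key."""
    totals = defaultdict(Decimal)
    for row in rows:
        totals[row[key]] += Decimal(row["cost_usd"])
    return dict(totals)

# Per-account enrichment spend, per-campaign burn, per-provider totals:
# cost_by(rows, "account_id"); cost_by(rows, "campaign_id"); cost_by(rows, "provider")
```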

Today, LeadHunter doesn’t block operations when usage hits a threshold — there’s no built-in quota enforcement layer. Heavy usage just produces a bigger bill from the underlying providers. If you’re running LeadHunter on your own provider credentials (Gemini key, Maps key, Tavily key, …), you’ll see the actual usage on those provider dashboards too.

Plan / quota / billing-period enforcement at the LeadHunter level is on the roadmap. Until it lands, the per-provider dashboards on Google Cloud / OpenAI / Anthropic / Tavily remain the source of truth for “am I about to overshoot my own budget?”.

Five concrete patterns for keeping costs down:

  1. Score reuse across campaigns. The fit score lives on the (Product, Account) pair, not on the CampaignAccount. Adding the same account to a second campaign of the same product reuses the existing score — zero Gemini cost, instant. Don’t create per-campaign products unless you genuinely need different ICPs.
  2. Auto-enrichment short-circuits. New accounts get website-discovery + scraping for free only when they arrive without those fields. Imports that ship a populated website + website_content skip the enrichment job — the auto-enrichment signal checks for this. Pre-populating the import file saves quota.
  3. Dry-run before bulk imports. Set dry_run=true on /import_mapped/. It runs every dedupe check and shows you the projected merge / create / fuzzy-candidate counts without spending a Gemini call on extraction (see the sketch after this list). Stops the “I just discovered the wrong column mapping after spending €40 on enrichment” problem.
  4. Use saved filters over fresh Google Maps sweeps. Once your database has the bike shops in Berlin, building a saved filter that finds them is free. Re-running the original Google Maps query a month later only catches new shops — but it also re-pays the place-details API cost for the ones it merges into existing rows. If you don’t expect a lot of new places, prefer the filter.
  5. Specific Product descriptions. A sharp Product description + good example URLs means the first batch of scoring is well-calibrated. A vague description means a lot of moderate-confidence scores, more manual overrides, more rescoring as you tighten the ICP — every rescore is fresh Gemini cost.
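The dry run referenced in pattern 3, with website and website_content pre-populated per pattern 2. dry_run and /import_mapped/ are documented; the base URL, payload shape, and response fields are assumptions:

```python
# Sketch: validate a bulk import without spending enrichment quota.
# dry_run and /import_mapped/ are documented; the payload shape and the
# response fields are assumed for illustration.
import requests

BASE_URL = "https://leadhunter.example.com"  # hypothetical instance URL

resp = requests.post(
    f"{BASE_URL}/import_mapped/",
    headers={"Authorization": "Token YOUR_API_TOKEN"},
    json={
        "dry_run": True,
        "rows": [
            {
                "name": "Berlin Bike Co",
                # Shipping website + website_content also skips the
                # auto-enrichment job on the real run (pattern 2).
                "website": "https://berlinbike.example",
                "website_content": "Independent bike shop in Kreuzberg ...",
            },
        ],
    },
    timeout=60,
)
resp.raise_for_status()
report = resp.json()
# Hypothetical response fields: projected merge / create / fuzzy-candidate counts.
print(report.get("merge"), report.get("create"), report.get("fuzzy_candidates"))
```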
Four pitfalls to avoid:

  • Confusing the two cost surfaces. CAC is computed from the user-entered side only — the auto-tracked side is reported in USD next to it, not folded in. A campaign with €0 user-entered expenses but $40 of auto-tracked Gemini scoring will read “no CAC” but still show the $40 LLM total. For full-cost analysis, eyeball both cards together.
  • Forgetting to log user-entered spend. Auto-tracked costs accumulate by themselves. Adwords invoices and agency fees only show up in the CAC math if you log them. Make logging spend a weekly habit.
  • Re-running the same Google Maps search. Each re-run hits the place-details API for every result, even ones that merge into existing rows. Slightly cheaper than the first run (since the dedupe stack short-circuits the creation path) but not free.
  • Expecting plan limits to stop you. They won’t, today. Watch the per-provider dashboards on Google Cloud / OpenAI / Anthropic.