Target persona and scoring

The target persona describes who a campaign is reaching, and lives on the (Product, Goal) pair: a sales campaign of a product carries a buyer persona, a press campaign of the same product carries a publications-and-journalists persona, a recruiting campaign carries a candidate persona — each independent, each with its own approval state. We historically called this the “Ideal Customer Profile” (ICP) and the term is still used as an alias in sales-shaped flows, but the underlying object is goal-agnostic. Every account that enters a campaign gets a fit score for that goal: a number 0–10, a categorical label, and a short list of typed justifications.

Scoring is the bridge between your raw account database and an actionable campaign list. Done well, it turns “here are 2,000 leads” into “here are the 80 to focus on this week.”

LeadHunter’s target persona is structured — not just a paragraph. The AI generates it from your Product’s description, website, and example URLs, then you review and approve.

| Field | What it holds |
| --- | --- |
| name | Short label for the persona (e.g. “The Tech-Forward CFO” for sales, “Senior B2B SaaS Reporter” for press, “Senior Backend Engineer, Series A Berlin” for recruiting). Auto-generated; editable. |
| summary | Plain-language overview of who you’re targeting. Read first by the model. |
| industries | List of sectors / editorial domains / specialties relevant to the goal. |
| job_titles | List of relevant decision-maker / journalist / candidate / partner roles. |
| pain_points | What this persona cares about — what would motivate them to engage. For sales it’s product-relevant pain; for press it’s editorial angles; for candidates it’s career drivers. |
| company_size_min / company_size_max | Size band — employees for company targets, audience size for publications, team size for partners. Either may be omitted. |
| confidence_level / confidence_score | How confident the AI is in this draft (low / medium / high, plus a 0–100 score). A low-confidence first draft is the signal to spend more time editing before approval. |
| source_urls | URLs the AI cited while drafting. Useful when reviewing — you can see what the model was reading. |
| is_approved | True once you click Approve. Scoring still runs against a draft, but approval is the signal the document is calibrated. |
| revision_count | Number of times the persona has been re-generated. Bumped on every regenerate. |
| revision_feedback | Free-text you provide when asking the AI to redraft (“focus more on tier-1 trade press, less on general business pubs”) — fed into the regeneration prompt. |

You can edit any field after generation; the model uses the whole structured persona plus the Product description on every score.
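
In code terms, the persona is a small record rather than free text. A minimal sketch of the shape (field names from the table above; types and defaults are illustrative assumptions, not LeadHunter’s actual schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetPersona:
    """One structured persona per (Product, Goal) pair.
    Field names follow the docs; types are illustrative assumptions."""
    name: str                                 # short auto-generated label, editable
    summary: str                              # read first by the model
    industries: list[str] = field(default_factory=list)
    job_titles: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    company_size_min: Optional[int] = None    # either bound may be omitted
    company_size_max: Optional[int] = None
    confidence_level: str = "low"             # "low" | "medium" | "high"
    confidence_score: int = 0                 # 0-100
    source_urls: list[str] = field(default_factory=list)
    is_approved: bool = False                 # set by the Approve click
    revision_count: int = 0                   # bumped on every regenerate
    revision_feedback: str = ""               # fed into the regeneration prompt
```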

A fresh ICP starts in Pending approval. You see the AI’s draft, edit it, click Approve to mark it ready for scoring. Until approval, scoring runs against the draft anyway — but the “this ICP is approved” flag is the signal that the document is calibrated and stable.

If the first draft is off, you have two options:

  1. Edit in place — change individual fields and re-approve. Best for small refinements.
  2. Regenerate with feedback — fill in revision_feedback (“focus on independent garages, drop the enterprise framing”) and ask the AI to re-draft. The feedback flows into the regeneration prompt; revision_count increments. Best for course corrections.

You can revise the ICP at any time. Pay attention to confidence_level on the draft — a low-confidence draft from a sparse Product description usually means the model couldn’t anchor; sharpening the Product description before regenerating is more effective than editing the ICP by hand.

Scoring asks “how well does this account fit the target?” — but the answer to “what target?” depends on the campaign’s goal. For a Sales campaign it’s “how well do they fit our buyer profile”. For a Press campaign it’s “how well does this outlet fit our story”. For Recruiting it’s “how well does this person fit the role”. For Partnership it’s “how strong a partner would this be”.

The 0–10 scale and the typed reasons (positive / negative / neutral) stay the same regardless of goal, so a 9 always means very strong fit and a 3 always means clear mismatch. What shifts is the axis being rated. The model is told the goal up front so the score reasons it returns are framed correctly — “covers our space editorially” for Press, “complementary product stack” for Partnership, “recent stack matches the role” for Recruiting.

See Outreach reasons and targets for the full goal list.

Scoring draws on these inputs:

| Input | Where it comes from |
| --- | --- |
| Campaign goal | What this campaign is trying to do. Decides which axis the score is rated against (see above). |
| Public website content + metadata | Auto-enrichment scrapes the account’s website on create; the cached text is what the model reads (not re-fetched on every score). |
| Custom field values | Whatever you’ve defined for the Company under Settings → Custom fields and filled on the account. Goes in as structured key-value context. |
| Structured account columns | business_type, specialization, language, phone, country, city, rating. |
| Example good / bad URLs | 2–3 of each on the Product. After the ICP itself, the single most powerful lever: sharp example URLs do more for scoring quality than anything else. |
| Calibration examples | Recent approvals + rejections, once the feedback loop crosses the ~10-account threshold for the (product, goal) pair. |
| The full ICP | Every structured field above, plus the Product’s description. |
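
Conceptually, the scoring context is assembled from exactly these inputs. The sketch below is illustrative only; every name in it (build_scoring_context, website_text_cached, and so on) is hypothetical, and the table above remains the authoritative list:

```python
def build_scoring_context(campaign, product, account, calibration_examples):
    """Hypothetical sketch of what the scoring prompt sees, mirroring
    the inputs table. Attribute names are assumptions, not the real API."""
    return {
        "goal": campaign.goal,                            # decides the rating axis
        "website_content": account.website_text_cached,   # cached scrape, not re-fetched
        "custom_fields": account.custom_field_values,     # structured key-value context
        "structured_columns": {
            "business_type": account.business_type,
            "specialization": account.specialization,
            "language": account.language,
            "phone": account.phone,
            "country": account.country,
            "city": account.city,
            "rating": account.rating,
        },
        "example_urls": {"good": product.good_urls, "bad": product.bad_urls},
        "calibration_examples": calibration_examples,     # [] below the ~10-review threshold
        "icp": product.persona,                           # the full structured persona
        "product_description": product.description,
    }
```
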
Scoring returns three fields per account:

| Field | Type | Meaning |
| --- | --- | --- |
| ai_score | 0–10 | Numeric fit score. |
| score_label | enum | excellent (≥8), moderate (5–7), mismatch (<5). |
| score_reasons | array | 2–4 short, typed justifications — see below. |

Score reasons are typed for at-a-glance reading

Each reason carries a type so the UI can show you what’s pulling the score up vs. down without you having to read prose:

| Type | Icon | What it means | Example |
| --- | --- | --- | --- |
| Positive | green check | Concrete ICP match | “FM rock station — matches ICP industry.” |
| Negative | red X | Concrete ICP gap | “Single-location operator — below the size range.” |
| Neutral | amber warning | Relevant context that doesn’t clearly help or hurt | “Spanish-language content.” |

excellent scores lead with positives, mismatch scores lead with negatives, and when both signals are present the AI includes at least one of each so the trade-off is visible.
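
Putting the output fields together: a sketch of the score payload with the label thresholds from the table above (assuming integer scores; the ordering of reasons is model behavior, not post-processing):

```python
from dataclasses import dataclass

@dataclass
class ScoreReason:
    type: str   # "positive" | "negative" | "neutral"
    text: str   # e.g. "FM rock station — matches ICP industry."

@dataclass
class FitScore:
    ai_score: int                      # 0-10
    score_reasons: list[ScoreReason]   # 2-4 typed justifications

    @property
    def score_label(self) -> str:
        if self.ai_score >= 8:
            return "excellent"
        if self.ai_score >= 5:
            return "moderate"          # 5-7
        return "mismatch"              # <5
```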

Scoring runs on Gemini Flash by default — fast and cheap. Per-account cost is a fraction of a cent (typically <€0.001); a batch of a few hundred accounts finishes in 5–15 minutes. Cost is tracked per account and per campaign, with a per-model breakdown on both surfaces — see Usage and costs.

The campaign’s scoring_status flips to failed when the background job hits an error (provider quota exhausted, malformed account data the LLM rejects, etc.). Scores from batches that completed before the failure are kept — re-running scoring will pick up where it left off and only score the accounts that don’t yet have a score. Today the campaign page doesn’t surface the failure prominently, so if a rescore finishes faster than expected and you see fewer scored accounts than you launched with, check scoring_status on the campaign.
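
Because completed scores are kept, a re-run is effectively resume-only. A minimal sketch of that behavior with hypothetical names (the real job runs as a background batch, not a loop):

```python
class ProviderError(Exception):
    """Provider rejected a request (quota exhausted, malformed data)."""

def rescore_campaign(campaign, score_account):
    """Score only accounts that don't yet have a score; earlier batches
    are kept. `score_account` stands in for the per-account LLM call."""
    for ca in campaign.accounts:
        if ca.fit_score is not None:
            continue  # scored before the failure; keep it
        try:
            ca.fit_score = score_account(campaign.product, ca.account, campaign.goal)
        except ProviderError:
            campaign.scoring_status = "failed"  # check this after odd runs
            raise
    campaign.scoring_status = "completed"
```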

Scoring deliberately does not read:

  • Conversation history with the account — past outbound and inbound aren’t part of the scoring prompt. (They are part of the AI-continue drafting prompt — different surface.)
  • Activity from other companies in your tenant — scoring is strictly scoped to the active Company’s Product and Account.
  • Paid data sources unless you’ve integrated them — LeadHunter’s default scoring doesn’t subscribe to ZoomInfo, Apollo, etc. If you’ve enriched accounts with data from those services and stored it in custom fields, that data goes in; the integrations themselves don’t.

The notes field on the account is sent to the scoring prompt (truncated to 500 characters). Notes are a useful place to drop short context the model should weigh — “Referred by an existing customer”, “Recently raised Series B”, “Operates in two countries” — and they’ll influence the score and the reasons. Treat the field like operator-shared AI context, not private scratch space.
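
Since only the first 500 characters survive, put the context that matters at the front of the notes. Illustratively (the limit is as documented; the helper is made up):

```python
MAX_NOTES_CHARS = 500  # documented truncation limit

def notes_for_prompt(notes: str) -> str:
    # Anything past the limit never reaches the model.
    return notes[:MAX_NOTES_CHARS]
```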

Score caching: one per (Product, Account, Goal)

The fit score lives on the (Product, Account, Goal) triplet — one score row per campaign goal. That means:

  • The same account in three sales campaigns of the same product is scored once — adding it to a second sales campaign reuses the existing score (no Gemini cost, no waiting).
  • The same account in a sales campaign and a press campaign of the same product gets two scores — they’re rated against different criteria (buyer fit vs editorial fit) so reuse would be wrong. Each goal owns its own score.
  • Swap a campaign to point at a different product, and all its accounts will need to be rescored. The dashboard offers a single-click bulk rescore for this case.
  • Switch a campaign’s goal and existing scores stay as they were (they’re for the previous goal); rescore to get fresh scores under the new framing.

Per-campaign overrides are scoped to a single CampaignAccount and don’t touch the underlying (Product, Account, Goal) score, so a manual adjustment in one campaign doesn’t bleed into the others. The API for setting an override exists today; the in-app UI for it is on the roadmap, so for now overrides are set programmatically rather than from the scoring page.
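
In effect the cache is a map keyed by the triplet, consulted before any model call, with per-campaign overrides layered on top. A sketch under those assumptions (all names hypothetical):

```python
# One cached score per (product_id, account_id, goal); overrides live on
# the CampaignAccount row and never touch this cache.
score_cache: dict[tuple[str, str, str], float] = {}

def get_fit_score(product, account, goal, score_account, campaign_account=None):
    # A per-campaign override wins inside its own campaign only.
    if campaign_account is not None and campaign_account.override_score is not None:
        return campaign_account.override_score
    key = (product.id, account.id, goal)
    if key not in score_cache:
        # Miss: a second sales campaign of the same product reuses the
        # sales score; a press campaign (different goal) scores fresh.
        score_cache[key] = score_account(product, account, goal)
    return score_cache[key]
```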

The feedback loop — your reviews calibrate the model

The biggest lever for improving scoring quality isn’t tweaking the ICP — it’s marking accounts as Approved or Rejected in the campaign review queue. Once a (product, goal) pair crosses about ten reviewed accounts across all its campaigns, LeadHunter starts feeding your most recent approvals and rejections to the scoring prompt as concrete examples of “what good looks like here” and “what to skip even when the surface details look fine.”

Below that threshold, the calibration few-shot is held back — small samples are noisier than no sample, so the original example URLs on the Product remain the only anchor.

The signal is goal-scoped within a product: approvals on a sales campaign of Product X don’t leak into a press campaign of the same product, and vice versa. The two are rated against different criteria — mixing them would teach the model the wrong lesson. The same Account approved in two sales campaigns of the same product counts once; the same Account being rescored never gets fed its own past verdict (avoids a self-reinforcing loop).

This is why the first campaign on each new (product, goal) pair is the one to review most thoroughly — every approve/reject you make there shapes the next several hundred scores for that pair.
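
The selection rules above compress to a short filter. A sketch, assuming the threshold and scoping as described (names hypothetical):

```python
CALIBRATION_THRESHOLD = 10  # ~10 reviewed accounts per (product, goal) pair

def calibration_examples(product, goal, scored_account, reviews):
    """Most recent approvals/rejections for this (product, goal) pair,
    one per account, never including the account being scored."""
    scoped = [r for r in reviews
              if r.product_id == product.id
              and r.goal == goal                       # goal-scoped: no sales/press leakage
              and r.account_id != scored_account.id]   # no self-reinforcing loop
    seen, unique = set(), []
    for r in sorted(scoped, key=lambda r: r.reviewed_at, reverse=True):
        if r.account_id in seen:
            continue  # approved in two campaigns counts once
        seen.add(r.account_id)
        unique.append(r)
    # Below the threshold, hold the few-shot back entirely.
    return unique if len(unique) >= CALIBRATION_THRESHOLD else []
```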

Reasons are written in the campaign’s language

ICP summary text, score reasons, and outbound message drafts are all produced in the same language. The chain is Campaign.communication_language → Product.communication_language → your account default → English. Set the campaign’s language to run parallel campaigns of the same product in different markets without duplicating the product.
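
The chain is a first-non-empty lookup. A sketch, assuming unset values are None and attribute names follow the chain as written:

```python
def reason_language(campaign, product, account_default=None) -> str:
    # Campaign.communication_language -> Product.communication_language
    # -> your account default -> English.
    return (campaign.communication_language
            or product.communication_language
            or account_default
            or "en")
```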

The Account’s own language field affects outbound message drafting (the message is written in the account’s language) but not scoring — scoring renders reasons in the campaign’s language regardless of where the account is located.

Some accounts arrive with a name, a city, and nothing else — phone-only entries, scanned business cards, old CRM exports. Scoring can still run, but with no website content the model leans on the structured fields alone (business type, custom fields, location). Scores tend to cluster in the moderate-to-mismatch range and the reasons are vaguer.

Two ways to improve them:

  1. Let auto-enrichment discover and scrape the website (the default for newly-created accounts).
  2. Trigger Deep research from the account detail page (multi-page crawl + contact extraction).

Once content lands, rescoring picks up the new signal.

Once a scoring pass completes, the bottom of the list — the mismatch band — is usually not worth your outreach time. The campaign-accounts panel has a one-click Remove low-fit action that clears these rows, with safe defaults that preserve anything you’ve already approved or started outreach on. See Campaign → Pruning the low-fit tail after scoring for what it removes, what it keeps, and the per-campaign scope.

Re-run scoring when:

  • You edit the ICP — the largest possible scoring shift.
  • You change the example URLs on the Product — second-largest shift.
  • You enrich accounts with new data — website discovery completed for accounts that had no URL; custom fields backfilled from an import.
  • You’ve passed the calibration threshold — if you’ve just reviewed your tenth account, the next batch of fresh scores benefits from the calibration examples; existing scores don’t update until you rescore them.

The same account isn’t auto-rescored across campaigns of the same product — the score lives on the (Product, Account, Goal) triplet and is sticky until you explicitly rescore.

Two entry points for re-running scoring:

| Mode | Where | Use when |
| --- | --- | --- |
| Quick-score on one account | The account detail page → Score button. | You’ve just enriched one account and want its score refreshed without re-running the whole campaign. |
| Bulk-rescore the whole campaign | The campaign page → Re-score action. | You edited the ICP, changed example URLs, or just crossed the calibration threshold and want the existing accounts updated. |

A rescore overwrites the previous score in place — there’s no per-account score history kept. If you want to see what the previous score was for a specific account before re-running, screenshot it or take a note; the new score will replace it.

The campaign detail page shows a 10-bucket histogram of fit scores after scoring runs (0-1, 1-2, … 9-10). It’s the fastest way to read whether the ICP is well-calibrated. See Campaign → Score distribution.
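
The histogram itself is plain bucketing of the cached scores. A sketch with the bucket edges as listed (a 10 lands in the top 9-10 bucket):

```python
def score_histogram(scores):
    """10 buckets: 0-1, 1-2, ..., 9-10; a score of 10 counts in the last."""
    buckets = [0] * 10
    for s in scores:
        buckets[min(int(s), 9)] += 1
    return buckets
```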

Related:

  • Run your first campaign — the operator-side workflow, including the review-and-calibrate pass.
  • Company and Product — why ICP lives on the product and not the campaign, and when to spin up a second product instead of a second ICP.
  • Custom fields — define the structured business context the scoring prompt reads.
  • Account — what scoring actually reads from, and the operator-only notes field it deliberately doesn’t.