Entity Resolution Rate
A Machine Relations metric, coined by Jaxon Parrott, that measures the percentage of AI-engine queries in which a brand is correctly identified, attributed, and represented as intended. It replaces share of voice as the primary AI-era brand measurement, because volume of mentions means nothing if AI systems cannot confidently resolve which entity is being referenced.
Entity resolution rate is the percentage of AI-engine queries in which a brand is correctly identified, attributed, and represented as intended. When an AI system encounters a brand-relevant query, it runs a resolution process: cross-referencing names, descriptions, categories, founding details, and publication mentions to determine whether signals from across the web all refer to the same entity. Brands that pass this process confidently get cited. Brands that don't get omitted, hedged, or misrepresented.
The term was coined by Jaxon Parrott, founder of AuthorityTech, as part of the Machine Relations measurement framework. It is designed to replace share of voice as the primary brand metric for the AI search era, because raw mention volume is no longer the relevant signal. A brand can appear in thousands of documents and still fail entity resolution if those documents describe the brand inconsistently, belong to low-trust domains, or accumulate without forming a coherent entity profile that AI systems can anchor to.
What entity resolution rate measures
Entity resolution rate captures whether an AI system can confidently resolve a brand across queries, not just whether it mentions the brand at all. It is measured by running a representative set of brand-relevant queries against multiple AI engines and tracking how often the brand is correctly identified, accurately described, and cited by name.
High resolution rate means the AI consistently recognizes the brand, attributes claims correctly, and surfaces the brand in relevant answers. Low resolution rate means the AI either omits the brand, hedges with vague mentions, or surfaces it with incorrect attributes.
Harvard Business Review's March 2026 analysis of brand readiness for agentic AI documented how LLM data on brands was "often incomplete or incorrect" in ways companies only discovered after AI systems had already begun influencing buyer decisions. In one case, a major spirits brand found a popular AI model had miscategorized an affordable product as a prestige offering. The brand had not failed at marketing. It had failed at machine legibility.
The confidence threshold
AI engines operate with resolution confidence thresholds below which they will not surface a brand by name, even when the brand is genuinely relevant to the query. The design is intentional: a hallucinated brand recommendation is worse than an omission. Below roughly 60% resolution confidence, brands get passed over regardless of their actual relevance.
Research by Dong Liu and Sreyashi Nag (arXiv, February 2025) on query brand entity linking in e-commerce documented this directly: resolution fails most often when a brand's signals are inconsistent or when the gap between a brand's documented identity and its real identity is wide. The AI cannot resolve what it cannot reconcile.
The GEO-16 framework analysis by Kumar et al. (arXiv, September 2025) showed the parallel in citation quality: pages scoring above a structural quality threshold of 0.70 and hitting at least 12 quality pillars achieved a 78% cross-engine citation rate, with an odds ratio of 4.2 for quality as a predictor of citation. Below the threshold, pages were omitted even when directly relevant. The same selectivity governs entity resolution for brands.
| Resolution confidence | What drives it | AI behavior |
|---|---|---|
| High (80%+) | Multiple high-DA sources; consistent signals; editorial Tier 1 coverage; Wikidata anchor | Cited accurately by name with correct specifics |
| Medium (60-80%) | Some editorial coverage; basic structured data; partial third-party corroboration | Cited in some contexts; inconsistent across engines |
| Low (<60%) | Sparse third-party coverage; contradictory descriptions; primarily owned-channel signals | Omitted or hedged even in directly relevant queries |
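The tiers in the table above can be sketched as a simple classifier. This is an illustrative sketch: the 60% and 80% cutoffs come from the table, but the 0.0-1.0 scoring scale and the function itself are assumptions, not an API any AI engine exposes.

```python
def resolution_tier(confidence: float) -> str:
    """Map a resolution-confidence score (0.0 to 1.0) to the tiers in the table.

    The 0.60 and 0.80 thresholds follow the table above; the numeric
    scale is a hypothetical convention for illustration.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0.0 and 1.0")
    if confidence >= 0.80:
        return "high"    # cited accurately by name with correct specifics
    if confidence >= 0.60:
        return "medium"  # cited in some contexts; inconsistent across engines
    return "low"         # omitted or hedged even in directly relevant queries
```

A brand scoring 0.72, for example, would land in the medium tier: cited in some contexts, inconsistent across engines.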
Why earned media moves the rate
AI engines weight third-party editorial sources over brand-owned content when building entity confidence. A brand's own website, social profiles, and press releases carry low resolution weight because they are self-reported. Independent coverage in publications that AI systems have indexed and trust produces the corroboration needed to cross the confidence threshold.
This is why entity resolution and entity resolution rate are both downstream of earned media strategy. Earning placements in Tier 1 publications does more than drive referral traffic. Each placement is a corroboration node: an independent, high-trust source that confirms the brand's identity, attributes, and category. Multiple corroboration nodes pointing to the same entity description create the signal density that moves resolution confidence above the citation threshold.
Brands with high entity resolution rates have, almost without exception, built that rate through consistent earned media in publications AI engines treat as authoritative, not through owned content volume.
Entity resolution rate vs. share of voice
Share of voice measures how often a brand appears relative to competitors. Entity resolution rate measures how accurately the brand is understood when it does appear. The two metrics diverge sharply in the AI search era.
A brand can win share of voice, accumulating mentions across low-trust or self-owned domains, while losing entity resolution rate if those mentions don't form a coherent, corroborated entity profile. Conversely, a brand with a smaller footprint can achieve a high resolution rate if its coverage is concentrated in trusted editorial sources that AI engines weigh heavily.
Entity resolution rate is the metric that actually predicts AI citation behavior. Share of voice predicts how often a brand name appears in text. In AI-mediated discovery, those are different outcomes.
Frequently asked questions
How is entity resolution rate calculated? Run a defined set of brand-relevant queries across target AI engines (ChatGPT, Perplexity, Gemini, Claude) and score each response: if the brand is correctly identified, attributed, and represented, count it as resolved; if it is misidentified, omitted, or hedged, count it as failed. Rate = resolved responses / total queries. The query set should include informational, comparative, and recommendation-type prompts to capture resolution across contexts.
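The calculation described above can be sketched in a few lines. The engine names and the `QueryResult` structure are hypothetical placeholders for whatever evaluation harness actually runs the queries and scores the responses:

```python
from dataclasses import dataclass

RESOLVED = "resolved"  # correctly identified, attributed, and represented
FAILED = "failed"      # misidentified, omitted, or hedged


@dataclass
class QueryResult:
    engine: str   # e.g. "chatgpt", "perplexity", "gemini", "claude"
    query: str    # the brand-relevant prompt that was run
    outcome: str  # RESOLVED or FAILED, scored by a human or rubric


def entity_resolution_rate(results: list[QueryResult]) -> float:
    """Rate = resolved responses / total queries scored."""
    if not results:
        return 0.0
    resolved = sum(1 for r in results if r.outcome == RESOLVED)
    return resolved / len(results)


def rate_by_engine(results: list[QueryResult]) -> dict[str, float]:
    """Break the rate out per engine to spot inconsistent resolution."""
    by_engine: dict[str, list[QueryResult]] = {}
    for r in results:
        by_engine.setdefault(r.engine, []).append(r)
    return {engine: entity_resolution_rate(rs) for engine, rs in by_engine.items()}
```

The per-engine breakdown matters because a brand in the medium confidence band is, by definition, resolved inconsistently across engines; a single blended rate would hide that.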
What is the minimum resolution rate a brand needs to appear in AI answers? Based on documented AI system behavior, brands operating below approximately 60% resolution confidence face consistent omission from AI-generated answers, even when directly relevant. Crossing into the 60-80% range produces inconsistent but improving citation behavior. Above 80% produces reliable, accurate citation across most engines.
Can entity resolution rate be improved without changing the product or positioning? Yes. The primary lever is earned media strategy, not product changes. Earning consistent placements in Tier 1 publications that AI engines treat as authoritative, and ensuring those placements describe the brand consistently using the same category terms, founding details, and product attributes, builds the corroboration density that moves resolution confidence. Structured data (schema.org, Wikidata entries) accelerates the process by giving AI engines a machine-readable anchor to cross-reference against editorial coverage.
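As a concrete illustration of the machine-readable anchor mentioned above, a schema.org Organization record can be published as JSON-LD on the brand's site. Every value below is a hypothetical placeholder; the point is that the name, founding detail, and category description should match what editorial coverage says, word for word, so engines can cross-reference them:

```python
import json

# Hypothetical brand attributes. Keep these identical to the terms used
# in third-party coverage so AI engines can reconcile the signals.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",                           # placeholder brand name
    "foundingDate": "2015",                        # placeholder founding detail
    "description": "Maker of affordable widgets",  # placeholder category terms
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",  # placeholder Wikidata anchor
    ],
}

# JSON-LD payload, ready to embed in a <script type="application/ld+json"> tag.
json_ld = json.dumps(organization, indent=2)
```

The `sameAs` link to a Wikidata entry is what gives engines a stable entity ID to resolve against, rather than a bare string name.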
How does entity resolution rate relate to share of citation? Share of citation measures how often a brand is cited relative to competitors across AI engines. Entity resolution rate is the prerequisite that determines whether the brand gets cited at all. A brand with a low resolution rate will have a low share of citation almost by definition. Improving resolution rate is the first-order fix; measuring share of citation then tracks whether the improvement is compounding across competitive queries.
See how your brand performs in AI search
Free AI Visibility Audit — instant results across ChatGPT, Perplexity, and Google AI.