Share of Citation


Share of Citation is the percentage of total category citations your brand earns in AI-generated answers across engines like ChatGPT, Perplexity, and Gemini — the Machine Relations replacement for share of voice. Where traditional marketing measured brand impressions (how many times your message was shown), share of citation measures brand authority (how often AI engines cite your brand as the answer to the questions your ICP is actually asking).

The shift from impressions to citations is not just an evolution in measurement; it is a structural change in how brand discovery works. AI engines do not show impressions. They synthesize answers. When a buyer asks, "What's the best AI visibility platform for B2B SaaS?" the engines pick one, two, or three brands to cite. Share of citation tells you whether yours is one of them — and if not, who is taking that query territory instead.

How share of citation works

Share of citation is calculated by sampling AI engine responses across a defined query set — typically 50 to 200 high-intent queries within a specific category — and measuring citation frequency per brand.

Basic formula: (Your brand citations / Total citations in query set) × 100 = Share of Citation %

Example: A category audit across 100 queries returns 320 total citations across 18 brands. If your brand was cited 45 times, your share of citation is 14.1%. If a competitor was cited 92 times, their share is 28.8% — double yours. That gap is the territory you need to reclaim.
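The formula and the worked example above can be checked with a few lines of Python (the citation counts are the illustrative figures from the example, not real data):

```python
def share_of_citation(brand_citations: int, total_citations: int) -> float:
    """Share of citation: a brand's percentage of all citations in the query set."""
    if total_citations == 0:
        return 0.0
    return 100.0 * brand_citations / total_citations

# Figures from the worked example: 320 total citations across 100 queries.
print(round(share_of_citation(45, 320), 1))  # your brand: 14.1
print(round(share_of_citation(92, 320), 1))  # competitor: 28.8
```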

Cross-engine context matters. According to research by Ronald Sielinski published in "Quantifying Uncertainty in AI Visibility" (March 2026), citation distributions across AI engines follow a power-law form and exhibit substantial variability across repeated samples. Single-run point estimates of citation share provide a "misleadingly precise picture of domain performance." Best practice: sample repeatedly across time windows (daily over 7-9 days minimum) and report share of citation with confidence intervals, not single-run snapshots.
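One way to follow that practice is a percentile bootstrap over per-day samples, resampling whole days so that day-to-day variability flows into the interval. A minimal sketch, assuming you have daily citation counts for your brand and for the full field (the eight days of counts below are hypothetical):

```python
import random

def bootstrap_share_ci(daily_ours, daily_total, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for share of citation, resampling whole days."""
    rng = random.Random(seed)
    days = list(range(len(daily_ours)))
    shares = []
    for _ in range(n_boot):
        sample = [rng.choice(days) for _ in days]  # resample days with replacement
        ours = sum(daily_ours[d] for d in sample)
        total = sum(daily_total[d] for d in sample)
        shares.append(100.0 * ours / total if total else 0.0)
    shares.sort()
    lo = shares[int(alpha / 2 * n_boot)]
    hi = shares[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical 8-day sampling window: our citations and total citations per day.
ours  = [6, 4, 7, 5, 6, 3, 8, 6]
total = [40, 38, 45, 41, 39, 37, 44, 42]
low, high = bootstrap_share_ci(ours, total)
# Report "share of citation: low%-high%" rather than a single-run point estimate.
```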

Why share of citation replaced share of voice

Share of voice (SOV) was the dominant brand visibility metric from the 1970s through the early 2020s. It measured your brand's percentage of total media impressions — ad spend, PR mentions, social mentions, search impressions — against competitors. SOV was built for a world where media exposure equaled market presence.

That world ended when AI engines became the first reader of media.

| Share of Voice (Legacy) | Share of Citation (Machine Relations Era) |
| --- | --- |
| Measures impressions — how many people could see your brand | Measures citations — how often AI engines choose to cite your brand as the answer |
| Success = highest ad spend + clip volume + social mentions | Success = highest citation rate in AI-generated answers for high-intent queries |
| Optimizes for reach — more people exposed | Optimizes for authority — more machines confidently citing |
| Measurement: media monitoring tools (Meltwater, Cision, Muck Rack) | Measurement: AI visibility platforms (Profound, Peec AI, Ahrefs Brand Radar, AuthorityTech) |
| Human-mediated discovery — journalists, editors, ad buyers | Machine-mediated discovery — AI engines deciding what to surface |

The terminal limitation of share of voice: It counts media presence. It does not measure whether AI engines treat that presence as a citable source. A brand can have massive SOV — millions in ad spend, hundreds of press mentions, strong social presence — and still have near-zero share of citation if AI engines do not consider those signals authoritative enough to cite in answers. Research from Wrodium (September 2025) on AI answer engine citation behavior found that engines differ markedly in the quality of pages they cite, with metadata freshness, semantic HTML, and structured data being the strongest predictors of citation — not total media volume.

What determines share of citation

Share of citation is driven by three core factors: earned authority, entity clarity, and citation architecture — the first three layers of the Machine Relations stack.

1. Earned authority (Layer 1) — Are you published in sources AI engines already trust?

Research from the University of Toronto found that AI search engines show a "systematic and overwhelming bias" toward earned media (third-party authoritative sources) and against brand-owned content. According to WorldCom PR Group industry analysis (October 2025), up to 90% of citations driving brand visibility in LLMs come from earned media. AI engines trust Forbes, TechCrunch, Wall Street Journal — not your blog. If you are not published in Tier 1 outlets, your share of citation starts at zero regardless of how good your content is.

2. Entity clarity (Layer 2) — Can AI engines confidently resolve who you are?

Entity resolution is AI's ability to identify, retrieve, compare, and cite a brand without ambiguity. If your brand shares a name with another company, if schema markup is missing or inconsistent, if knowledge panels are incomplete, AI engines cannot confidently cite you even when your content is strong. Share of citation collapses when entity signals are weak.

3. Citation architecture (Layer 3) — Is your content extractable?

AI engines do not cite pages that are hard to parse. According to the Wrodium GEO-16 Framework (September 2025), pages with structured data, semantic HTML, and recency metadata achieve a 78% cross-engine citation rate at quality thresholds (GEO score ≥ 0.70 and ≥ 12 pillar hits). Pages without those signals get filtered out regardless of content quality. Citation architecture is the formatting, metadata, and semantic structure that lets AI engines extract clean claims from your content.
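As an illustration of one such signal, the sketch below emits schema.org Article JSON-LD with a recency field. The metadata values are hypothetical, and this is not the GEO-16 framework itself, just the kind of structured data it scores:

```python
import json
from datetime import date

# Hypothetical article metadata; dateModified is the recency signal
# that citation-architecture audits check for.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Share of Citation: Definition and Measurement",
    "author": {"@type": "Organization", "name": "Example Brand"},
    "datePublished": "2025-09-01",
    "dateModified": date.today().isoformat(),
}

json_ld = json.dumps(article, indent=2)
# Embed inside <script type="application/ld+json"> in the page <head>.
```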

How to measure share of citation

Step 1: Define your query set. Identify 50-200 high-intent queries your ICP asks when searching for solutions in your category. Mix definitional queries ("what is [category]"), comparison queries ("best [category] platforms for [vertical]"), and use-case queries ("how to [solve problem] with [category]").

Step 2: Sample AI engine responses. Submit each query to ChatGPT, Perplexity, Gemini, and Google AI Overviews. Record which brands are cited in each answer. Best practice per Sielinski (2026): sample daily over 7-9 days rather than relying on a single run, because citation distributions are stochastic, not fixed.

Step 3: Aggregate citation counts per brand. Count total citations for your brand, each competitor, and the full field. Calculate percentages. Your share of citation is (your citations / total citations) × 100. Confidence intervals matter — report ranges, not point estimates.

Step 4: Identify citation gaps. Where competitors are cited and you are not, that is a citation gap. Those queries represent territory you need to claim by strengthening earned authority, entity signals, or citation architecture in that sub-topic.
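Steps 3 and 4 can be sketched in a few lines of Python, assuming the per-query citation records from Step 2 are already collected (the queries and brand names below are hypothetical sample data):

```python
from collections import Counter

# Step 2 output: query -> set of brands cited across engines (hypothetical).
records = {
    "best ai visibility platform": {"BrandA", "BrandB"},
    "what is share of citation":   {"BrandA"},
    "top geo tools for b2b saas":  {"BrandB", "BrandC"},
}

# Step 3: aggregate citation counts and compute share per brand.
counts = Counter(brand for brands in records.values() for brand in brands)
total = sum(counts.values())
share = {brand: 100.0 * n / total for brand, n in counts.items()}

# Step 4: citation gaps -- queries where others are cited and you are not.
ours = "BrandA"
gaps = [q for q, brands in records.items() if brands and ours not in brands]
```

In this sample, BrandA holds 2 of 5 total citations (40%) and has one citation gap to reclaim.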

Tools that automate share of citation tracking: Profound, Peec AI, Ahrefs Brand Radar, and AuthorityTech AI Visibility Audit. These platforms continuously monitor AI engine responses across query sets and report citation share over time.

Key takeaways

  • Share of citation measures brand authority in AI-mediated discovery. It tells you how often AI engines cite your brand as the answer to high-intent category queries — the Machine Relations replacement for share of voice.
  • Impressions no longer equal visibility. A brand can have massive media presence (high SOV) and near-zero citations (low share of citation) if AI engines do not treat that presence as authoritative enough to cite in answers.
  • Citation is stochastic, not fixed. Best practice: sample repeatedly across time windows (7-9 days minimum) and report share of citation with confidence intervals, not single-run snapshots.
  • Share of citation is driven by the first three layers of the Machine Relations stack: earned authority (Tier 1 placements in sources AI engines trust), entity clarity (consistent, machine-readable identity signals), and citation architecture (formatted, structured, extractable content).
  • Citation gaps define your competitive strategy. Where competitors are cited and you are not, that is query territory you need to reclaim — not by optimizing pages, but by strengthening the underlying authority, entity, and architecture signals AI engines use to decide what to cite.

Frequently asked questions

Is share of citation the same as citation share?

Yes. Share of citation and citation share are interchangeable terms for the same metric. The full phrase "share of citation" emphasizes the replacement relationship to share of voice; "citation share" is the shorthand used in measurement dashboards.

What is a good share of citation?

It depends on category maturity and competitor density. In emerging categories (under 10 credible brands), 20-30% share of citation is leadership territory. In mature categories (50+ brands), 5-10% can be strong performance if query sets are large. The goal is not absolute percentage — it is trend direction. If your share of citation is growing quarter over quarter and competitor share is flat or declining, you are winning.

Can you game share of citation with on-page optimization?

No. Share of citation is not a technical SEO game. Research from Wrodium (2025) found that AI engines cite pages with strong metadata, semantic structure, and valid schema — but only when those pages also come from sources the engines already trust. On-page optimization without earned authority is optimizing the visibility of a brand with no credible signal. Distribution without substance spreads weakness faster. The first move is always earned authority — Tier 1 media placements that establish your brand as a category source worth citing.

How does share of citation relate to Machine Relations?

Share of citation is the primary measurement metric for Machine Relations — Layer 5 of the Machine Relations stack. Where traditional PR measured success by press clip volume and SOV, Machine Relations measures success by citation frequency in AI-generated answers. Share of citation tells you whether the first four layers of the stack (earned authority, entity clarity, citation architecture, distribution) are working — or where they need reinforcement.

What is the difference between share of citation and AI visibility score?

Share of citation measures your brand's percentage of total category citations — relative performance against competitors. AI visibility score measures your brand's absolute presence and accuracy across AI engines — a 0-100 composite metric based on citation rate, entity resolution, sentiment accuracy, and recommendation frequency. Share of citation is competitive positioning. AI visibility score is system-level health. Both matter, but share of citation is the metric that directly maps to market share in AI-mediated discovery.
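As an illustration only, a 0-100 composite of the four factors named here could be computed as a weighted average. The weights and the formula below are hypothetical, not a published scoring method:

```python
def ai_visibility_score(citation_rate, entity_resolution, sentiment_accuracy,
                        recommendation_frequency, weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical 0-100 composite; each input is a 0-1 rate."""
    factors = (citation_rate, entity_resolution, sentiment_accuracy,
               recommendation_frequency)
    return 100.0 * sum(w * f for w, f in zip(weights, factors))

# Hypothetical brand: cited in 35% of answers, strong entity resolution,
# mostly accurate sentiment, recommended in 25% of comparison answers.
score = ai_visibility_score(0.35, 0.9, 0.8, 0.25)
```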

See how your brand performs in AI search

Free AI Visibility Audit — instant results across ChatGPT, Perplexity, and Google AI.

Run Free Audit