Morning Brief | AI Search & Discovery

Yext Measured 17.2 Million AI Citations. The Four Engines Cited Completely Different Sources.

Every AI engine has a distinct citation personality. There is no universal optimization playbook — except the one strategy that works across all four.

Jaxon Parrott

Your CMO just asked: "What's our AI visibility strategy?"

The honest answer, according to the most comprehensive citation study published this year, is that there probably isn't a single one. There are four. And they barely overlap.

Yext analyzed 17.2 million AI citations across ChatGPT, Google's Gemini, Anthropic's Claude, and Perplexity.[1] The finding that should reset every AI visibility conversation you're having right now: each platform cites from a completely different source pool. Only about 11% of cited domains appear across multiple engines. The other 89% are platform-specific.[1]
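To make the overlap metric concrete, here is a minimal sketch of how a cross-engine domain-overlap share can be computed. This is illustrative only: the engine names are from the study, but the citation URLs are synthetic placeholders, and this is not Yext's actual methodology or data.

```python
# Illustrative sketch (synthetic data, not Yext's methodology): what share
# of cited domains appear in more than one engine's citation pool?
from urllib.parse import urlparse

# Hypothetical citation samples per engine; real data would be millions of URLs.
citations = {
    "chatgpt":    ["https://reddit.com/r/a", "https://example.com/faq"],
    "gemini":     ["https://brand.com/about", "https://example.com/faq"],
    "claude":     ["https://brand.com/blog", "https://trustpilot.com/review/x"],
    "perplexity": ["https://brand.com/help", "https://news-site.com/story"],
}

def domain(url: str) -> str:
    # Normalize a URL down to its registrable-ish host for comparison.
    return urlparse(url).netloc.lower().removeprefix("www.")

# One set of unique cited domains per engine.
pools = {engine: {domain(u) for u in urls} for engine, urls in citations.items()}
all_domains = set().union(*pools.values())

# Domains cited by two or more engines.
shared = {d for d in all_domains if sum(d in p for p in pools.values()) > 1}

print(f"shared across engines: {len(shared) / len(all_domains):.0%}")
```

With these toy inputs, only `example.com` and `brand.com` appear in more than one pool; the same set arithmetic scales to the full citation corpus.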

The AI optimization playbook most brands are building doesn't exist. Not as a single thing, anyway.

What Yext actually found

The study characterized each platform as having a distinct "information personality." Here's what the data showed:

| Platform | Top citation source | Reviews share | Consistency across industries |
| --- | --- | --- | --- |
| Gemini | Brand-owned (51%) + listings (42%) = 93% brand-controlled | Low | High |
| Claude | Owned sites (81%) + reviews (15%) | 2–4x higher than others | Moderate |
| Perplexity | Brand-owned (37–50%) | Low | Highest |
| ChatGPT | Varies by category | Varies | Lowest; highest context-sensitivity |

Gemini showed the strongest preference for brand-controlled content — 93% of citations came from sources brands manage directly.[1] It behaves more like traditional search: authority first, everything else a distant second.

Claude behaved differently. Reviews accounted for 15% of its citations — two to four times higher than any other engine.[1] Where Gemini rewards what you own, Claude rewards what customers say about you in public. A brand that has invested in owned content but ignored third-party sentiment will look different to these two engines.

Perplexity showed the most consistent behavior across industries. Brand-owned websites made up between 37% and 50% of its citations regardless of sector.[1] It favors structured, answer-ready sources — content that resolves questions directly rather than building authority indirectly.

ChatGPT swung the most between categories. In Food & Beverage, first-party websites drove most citations. In other verticals, the pattern shifted significantly.[1] ChatGPT is the most context-sensitive engine, adjusting its sourcing logic to what the query actually requires.

Four engines. Four completely different prioritization logics. A strategy built for one will underperform in another.

Why this matters more than most people realize

The practical implication of Yext's data is uncomfortable: you can win in Gemini and still be invisible in Claude. You can optimize for Perplexity's structured source preferences and miss ChatGPT's community-driven signals entirely.

This is the trap the current GEO/AEO market has built itself into. Most vendors are selling a platform-agnostic framework for something that is, by the data, platform-specific. Forrester's 2026 buyer survey found that 94% of B2B buyers now use AI tools during their purchase process.[2] Twice as many named generative AI as their most meaningful research source compared to any other channel — including vendor websites, product experts, and sales.[2] Bain's consumer research from December 2024 found that 80% of search users rely on AI summaries at least 40% of the time, with roughly 60% of searches now ending without the user clicking through to a website.[3] Your buyers are running across all four of these engines, not just one. The company visible across every AI answer gets considered. The company that optimized for one gets a blind spot.

The natural response to this data is to build four separate programs. Get your owned content right for Gemini, build community reviews for Claude, create structured answer-ready pages for Perplexity, and figure out whatever ChatGPT's category-specific logic requires. That's not a strategy. That's maintenance across four systems with different rules, all of which will update without notice.

There's a different way to read the data.

The thing that works across all four

When you look at what Yext's data actually rewards, one pattern runs underneath all of it: editorial credibility.

Gemini's brand-controlled source preference rewards authoritative owned content — but the Yext data also shows that even Gemini's local citations include news coverage. Claude's review weighting reflects what third parties say in public. Perplexity's structured sourcing preference rewards content credible enough to be cited by a system that doesn't take things at face value. ChatGPT's category-sensitivity rewards whatever is considered most authoritative in that specific space.

What each engine is trying to find, through different mechanisms, is the same thing: sources worth trusting.

The Muck Rack Generative Pulse analysis of over one million AI prompts found that 85% of AI citations come from earned media sources — with press releases accounting for just 1% despite a 5x growth in press release distribution.[4] The Fullintel/UConn academic study presented at IPRRC found that 89% of links cited by AI engines were earned media, with 95% being unpaid.[5] The Ahrefs ChatGPT citation analysis found that 65.3% of cited pages come from DR80+ domains — authority built through editorial credibility, not optimization.[6]

That pattern doesn't shift because Gemini and Claude weight sources differently. Both engines still need to assess whether a source is trustworthy. Earned media placements in established publications are among the only signals that resolve that question the same way across all four engines.

This is the dynamic the citation economy runs on — not which platform you optimized for, but which publications already have the trust that platforms defer to. The Stacker analysis put it directly: "Media relations are becoming machine relations. The patterns of AI are the same patterns that determine editorial credibility."[7] The Signal Genesys study of 179.5 million citation records across six LLM platforms found that Perplexity drives the largest citation volume — with 88.4% domain citation coverage for sources that had earned press release distribution reaching authoritative outlets.[8]

The flaw in platform-specific optimization

Building an AI visibility program around one engine's citation logic is optimizing for a moving target.

Yext's own data shows Claude cited reviews 2–4x more than other platforms — but that ratio will shift as Anthropic updates the model.[1] Gemini's 93% brand-controlled citation rate reflects a current weighting, not a permanent one. Perplexity dropped its advertising business late last year because it concluded advertising "undermined users' trust in their answers' accuracy," per TechCrunch.[9] Its citation behavior will change with its product strategy. Gartner projected a 25% decline in traditional search volume by 2026 — and that was a 2024 forecast, made before AI Overviews appeared on most queries.[10] The individual platform weightings are not the stable fact. The direction of travel is.

The Yext report itself acknowledges this: "A brand that appears in answers from one model may not appear in another if each platform draws on different source types."[1] Their recommendation is to pursue model-specific visibility strategies. That's sound tactical advice. But the question underneath it is: what's the substrate those tactics run on?

The GEO-16 academic framework — which audited 1,702 citations across Brave, Google AIO, and Perplexity — found that page quality signals matter significantly, but also noted that "generative engines heavily weight earned media and often exclude brand-owned and social platforms."[11] Even strong on-page quality doesn't guarantee citation if the page sits solely on a vendor blog.

The answer is the same thing that determined whether a brand was trusted before AI search existed: earned placements in the publications your category treats as authoritative. Those publications were indexed as reliable sources long before any of these citation studies were run.

What changes about your approach

The Yext data doesn't make platform-specific optimization irrelevant. Knowing that Claude rewards reviews, Gemini favors owned structure, and Perplexity wants clean answer-ready content is operationally useful. If you have nothing else, it tells you where to start.

But it tells you something more important. There's no version of this where you win across all four engines without editorial authority that sits above any single platform's weighting logic. You need to be the brand that gets written about in the publications that every major AI engine already trusts.

The OtterlyAI report on the AI citation economy found that 73% of sites have technical barriers blocking AI crawler access — which means most brands are invisible before citation weighting even comes into play.[12] Fixing that access layer is table stakes; the durable work is Machine Relations: the discipline of building citation infrastructure that predates and outlasts any specific AI platform's current sourcing preferences. PR's original mechanism (earned media in trusted publications) was always what this ran on. The Yext study just confirmed that it's the only mechanism that scales across what the AI landscape has become.
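On the crawler-access point, the first thing worth checking is whether your own robots.txt blocks the crawlers the major AI engines use. A minimal sketch using Python's standard library follows; the user-agent names (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are the publicly documented ones at the time of writing and should be verified against each vendor's crawler documentation.

```python
# Illustrative check: does a robots.txt block documented AI crawlers?
# The robots.txt content below is a hypothetical example that blocks GPTBot.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for bot in AI_BOTS:
    # can_fetch() applies the most specific matching rule group.
    allowed = parser.can_fetch(bot, "https://example.com/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

In practice you would fetch the live robots.txt (e.g. via `RobotFileParser.set_url` and `read()`) rather than parse a string, and also check server-level blocks (firewalls, CDN bot rules) that robots.txt cannot reveal.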

Before building four separate platform-specific programs, check whether your brand is present in the publication layer these engines all agree on. That's where the audit starts: app.authoritytech.io/visibility-audit

Footnotes

  1. Yext, "AI Citation Behavior Across Models: Evidence from 17.2 Million Citations," January 2026. https://www.yext.com/research/ai-citation-refresh-january-2026

  2. Forrester, "2026 Buyer Insights: GenAI Is Upending B2B Buying," January 21, 2026. https://investor.forrester.com/news-releases/news-release-details/forresters-2026-buyer-insights-genai-upending-b2b-buying-leaders/

  3. Bain & Company, "Consumer Reliance on AI Search Results," December 2024. https://www.bain.com/about/media-center/press-releases/20252/consumer-reliance-on-ai-search-results-signals-new-era-of-marketing--bain--company-about-80-of-search-users-rely-on-ai-summaries-at-least-40-of-the-time-on-traditional-search-engines-about-60-of-searches-now-end-without-the-user-progressing-to-a/

  4. Muck Rack Generative Pulse, "Earned Media Still Drives Generative AI Citations," December 2025. https://generativepulse.ai/whatisaireading

  5. Fullintel/UConn, "AI Media Citations: Credible Journalism Study," IPRRC, February 2026. https://fullintel.com/blog/ai-media-citations-credible-journalism/

  6. Ahrefs, "ChatGPT's Most Cited Pages," 2025. https://ahrefs.com/blog/chatgpts-most-cited-pages/

  7. Stacker, "Media Relations Are Becoming Machine Relations," February 4, 2026. https://stacker.com/blog/media-relations-are-becoming-machine-relations-and-most-brands-arent-ready

  8. Signal Genesys, "How Press Release Distribution Drives LLM Citations," January 2026. https://signalgenesys.com/how-press-release-distribution-drives-llm-citations-signal-genesys-study/

  9. TechCrunch, "Perplexity's new Computer is another bet that users need many AI models," February 27, 2026. https://techcrunch.com/2026/02/27/perplexitys-new-computer-is-another-bet-that-users-need-many-ai-models

  10. Gartner, "Gartner Predicts Search Engine Volume Will Drop 25% by 2026 Due to AI Chatbots," February 2024. https://www.gartner.com/en/newsroom/press-releases/2024-02-19-gartner-predicts-search-engine-volume-will-drop-25-percent-by-2026-due-to-ai-chatbots-and-other-virtual-agents

  11. Kumar et al., "GEO-16: A 16-Pillar Auditing Framework," arXiv, September 2025. https://arxiv.org/abs/2509.10762

  12. OtterlyAI, "The AI Citation Economy: 1M+ Data Points," February 2026. https://otterly.ai/blog/the-ai-citations-report-2026/