# Why GEO Doesn't Work Without Earned Media
New academic research shows most GEO content optimization tactics are largely ineffective at improving AI citation rates. Here's what the data says actually drives AI visibility and why GEO without earned media is optimization theater.
The GEO playbook has a problem. Not a minor one. A structural one that multiple independent research teams have now documented.
Generative Engine Optimization promised a path to AI visibility that looked reassuringly familiar: optimize your content, add structured data, insert statistics, write in FAQ format. The same logic that made technical SEO a discipline with a clear checklist. Except when researchers actually tested whether these tactics produce AI citations, the results came back uncomfortable.
Most C-SEO (conversational search engine optimization) methods -- the content rewriting techniques at the heart of GEO strategy -- are largely ineffective at improving how often AI systems cite a document. Some make things worse.
This finding comes from Parameter Lab and the Technical University of Darmstadt, published in 2025, in what is currently the most rigorous benchmark of GEO content tactics. Separate research from the University of Toronto, Ahrefs, Muck Rack, and a growing number of independent studies all converge on the same conclusion: AI engines are not primarily selecting content based on how that content is structured. They are selecting based on where it lives and who published it.
Earned media -- third-party coverage in publications AI engines already trust -- is not a complement to GEO strategy. It is the foundation without which GEO produces very little. The distinction matters because earned authority is the layer that all other AI visibility tactics depend on.
## Key Takeaways
- The C-SEO Bench study (Parameter Lab, 2025) found that most content optimization techniques marketed as GEO tactics are "largely ineffective" -- some produce negative effects on AI citation rates
- The University of Toronto's large-scale GEO study found AI search engines show "a systematic and overwhelming bias towards earned media" over brand-owned content
- Ahrefs' study of 75,000 brands found brand web mentions correlate 3x more strongly with AI visibility than backlinks (0.664 vs 0.218 correlation)
- Muck Rack's analysis of 1M+ AI prompts found 85.5% of AI citations come from earned media sources; 95%+ from non-paid sources
- Pages with strong on-page GEO scores but no third-party earned media presence still underperform in AI citation compared to those with earned coverage from trusted publications
- For founders and marketing executives: earned media coverage in Tier 1 publications is not a "nice to have" alongside GEO. It is the primary driver of AI citation authority.
## The research no one wants to cite
The C-SEO Bench benchmark was published in June 2025 by researchers at Parameter Lab (Technical University of Darmstadt), the University of Mannheim, and associated institutions. It is the first study designed to evaluate whether GEO-style content rewriting actually improves citation rates in conversational AI search engines.
The methodology was tight: 15 commonly recommended C-SEO methods, tested on question-answering and product recommendation tasks across multiple domains, with varying numbers of competitors adopting the same tactics simultaneously.
The conclusion: most current C-SEO methods are not only largely ineffective but also "frequently have a negative impact on document ranking, which is opposite to what is expected."
The exception -- the approach that did work -- was improving actual source authority in the LLM context window: not rewriting content for machines, but being the kind of source machines already trust.
The full benchmark is available on arXiv for practitioners who want to verify the methodology.
This finding does not mean content structure is irrelevant. Separate research from Wrodium Research (the GEO-16 framework, published September 2025) found that technical page quality signals -- recency metadata, semantic HTML, structured data -- do correlate with citation probability, with an odds ratio of 4.2 for pages scoring above a quality threshold. The GEO-16 paper documented this across 1,702 citations from Brave, Google AIO, and Perplexity, auditing 1,100 unique URLs.
But even GEO-16's findings carry a sobering data point: mean GEO quality scores for pages Perplexity cites are 0.300 out of 1.0, meaning Perplexity regularly cites low-quality pages by technical GEO standards. The signal of technical optimization exists, but it is weak and easily overwhelmed by source authority.
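An odds ratio like GEO-16's 4.2 comes from a 2x2 contingency table: cited vs. not cited, crossed with above vs. below the quality threshold. A minimal sketch of the calculation, using hypothetical counts chosen only to illustrate the arithmetic (the paper's actual table is not reproduced here):

```python
def odds_ratio(cited_hi, not_cited_hi, cited_lo, not_cited_lo):
    """Odds of citation for high-scoring pages divided by
    odds of citation for low-scoring pages."""
    odds_hi = cited_hi / not_cited_hi
    odds_lo = cited_lo / not_cited_lo
    return odds_hi / odds_lo

# Hypothetical counts: 210 of 500 high-scoring pages cited,
# 100 of 680 low-scoring pages cited -> OR = 4.2
print(odds_ratio(210, 290, 100, 580))
```

An odds ratio of 4.2 means high-scoring pages have 4.2x the *odds* of citation, not 4.2x the probability -- a distinction worth keeping when reading the GEO-16 numbers.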
What neither paper disputes is what the University of Toronto research found in a parallel large-scale study: AI search engines demonstrate "a systematic and overwhelming bias towards earned media (third-party, authoritative sources) over brand-owned and social content, a stark contrast to Google's more balanced mix." The bias toward external publications is not marginal. It is the dominant structural fact of how AI engines select what to cite.
## What earned media does that GEO tactics cannot
GEO tactics operate on the assumption that AI engines select content primarily based on how that content is written: its structure, formatting, keyword density, FAQ architecture. This model was borrowed from SEO, where the causal link between on-page factors and ranking is real and documented.
AI search engines work differently at the source selection layer.
When a user asks ChatGPT, Perplexity, or Google AI Mode a question about your industry, those systems pull from sources they have already evaluated for external credibility. That evaluation is not done at query time by reading your content. It happened during training and index-building, by assessing where and how often your brand appears across trusted third-party publications.
Ahrefs documented this in a 2025 study of 75,000 brands. Brand web mentions -- the accumulated signal of third-party coverage mentioning your company across the open web -- correlated with AI Overview visibility at 0.664. Traditional SEO backlinks correlated at 0.218. The ratio: 3x. That is Ahrefs' own data showing that the metric at the heart of technical SEO is about one-third as predictive for AI visibility as the metric at the heart of public relations.
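The 0.664 and 0.218 figures are correlation coefficients; anyone tracking their own mention and visibility data can compute the same statistic. A minimal sketch with toy data (the series below are invented for illustration, not Ahrefs' dataset):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: monthly brand web mentions vs. AI Overview appearances
mentions = [120, 340, 90, 560, 210]
visibility = [14, 35, 8, 51, 22]
print(round(pearson_r(mentions, visibility), 3))
```

The useful takeaway from Ahrefs' numbers is not the absolute values but the gap: the same calculation run against backlinks instead of mentions produced a coefficient roughly one-third as large.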
Tim Soulo, CMO of Ahrefs, said: "You just need to see where your competitors are mentioned, where you are mentioned, where your industry is mentioned. And you have to get mentions there -- because then if the AI chatbot would do a search and find those pages and create their answer based on what they see on those pages, you will be mentioned."
The Muck Rack Generative Pulse analysis, which examined over one million AI prompts in 2025, confirmed the source profile: 85.5% of AI citations come from earned media sources. 95% from non-paid sources. Press releases accounted for a fraction of a percent.
| Tactic | What it optimizes | Primary AI citation signal? |
|---|---|---|
| On-page FAQ sections | Content extractability | Weak: depends on source authority first |
| Statistics and citations in content | Content credibility signals | Moderate: but only if the source is already trusted |
| Schema / structured data | Technical machine-readability | Weak: Perplexity mean GEO score of cited pages = 0.30 |
| Backlinks | Domain authority in search | Weak: 0.218 correlation for AI visibility |
| Earned media in Tier 1 publications | Third-party trust signals | Strong: 0.664 correlation; 85.5% of AI citations |
## Why this gap exists in most GEO strategies
The GEO discipline grew from SEO, and it inherited SEO's mental model: visibility is determined by what happens on your own properties. Pages, site speed, structured data, keyword mapping. The channel changed but the logic felt familiar, so it got adopted without much interrogation.
The problem is that AI engines were not trained to treat your site as the authoritative source on your brand. They were trained on the open web -- and the open web's authoritative signal for any brand, person, or concept is the accumulated body of third-party coverage across credible publications.
This is not a design flaw in AI search. It is the same logic that makes third-party coverage more credible to humans than self-published claims. The signal AI engines learned to trust is the same signal that determines human credibility: independent corroboration from sources with their own editorial standards.
When a brand has extensive Tier 1 coverage in publications like Forbes, TechCrunch, Harvard Business Review, or Reuters, AI engines learn about that brand from sources they have assigned high trust scores. When that same brand invests in technical GEO -- schema, FAQ pages, long-form structured content -- without earned media, it is writing the signal in a language AI engines learned to discount: self-assertion.
Stacker and Scrunch published a controlled study in December 2025 measuring this directly: earned media distribution across third-party news outlets produced a 325% lift in AI citation rates -- from 8% to 34% citation rate -- across 5 leading AI platforms. That is not a marginal improvement. That is a structural difference between being cited and not being cited.
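The 325% figure is a relative lift, and the arithmetic behind it is worth making explicit, since relative and absolute lift are often conflated in coverage of this study:

```python
def relative_lift(before_pct, after_pct):
    """Relative lift, expressed as a percentage of the baseline."""
    return (after_pct - before_pct) / before_pct * 100

# Stacker/Scrunch figures: citation rate rose from 8% to 34%
print(relative_lift(8, 34))  # 325.0
```

The absolute change is 26 percentage points; the relative lift over the 8% baseline is 325%. Both framings describe the same result.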
## PR found the right answer before GEO did
Here is the unusual part of this story. The conclusion that academic research and data analytics firms have been building toward in 2025 and 2026 -- that AI citation is driven by earned third-party coverage -- was the operating thesis of the public relations industry for decades. PR's mechanism was always the same: get independent, credible third parties to say true things about your brand, in publications that carry their own editorial credibility. That is exactly what AI engines reward.
But PR got almost everything else wrong. The retainer model charged whether or not placements happened. Cold-pitch approaches burned journalist relationships faster than they built them. The industry scaled headcount instead of relationships.
On the other side, GEO practitioners built the measurement frameworks, the structured content doctrine, and the citation architecture that makes earned media extractable by AI engines. GEO's technical contribution is real and valuable. But without the foundation of earned media, GEO is structuring content that AI engines will not cite at meaningful rates.
Todd Ringler, head of U.S. media at Edelman, described GEO as "going to be front-and-center in any successful brand or reputation campaign" -- while explicitly framing it as dependent on earned media and content strategies. The world's largest PR firm adopting GEO language while emphasizing PR's foundational role. Both sides arriving at the same structure from different directions.
The WorldCom PR Group, a consortium of 160 independent PR agencies globally, stated: "Up to 90% of citations driving brand visibility in LLMs come from earned media, positioning public relations at the center of this transformation." That finding appeared in an analysis designed to make the case for PR's relevance -- but the data they used to make it came from GEO researchers.
PR is proving GEO's thesis. GEO data is proving PR's thesis. Neither has the architecture that connects the two.
That architecture is what Machine Relations describes -- the term Jaxon Parrott coined in 2024 after watching both industries converge on the same structural truth from opposite directions: the discipline of ensuring brands are recognized, cited, and recommended by AI systems that now mediate the first cut of B2B research.
## What the framework actually looks like
The five-layer Machine Relations stack makes the connection between earned media and GEO explicit. At its core, Machine Relations names the full system of brand discovery in an AI-mediated world -- the parent category that GEO, AEO, and AI SEO are each partial descriptions of:
| Layer | What it does | What fails without it |
|---|---|---|
| Earned Authority | Tier 1 placements that AI engines index and trust | Everything downstream. You're structuring content no one will cite. |
| Entity Clarity | Consistent brand identity signals across the web | AI engines cannot confidently resolve who you are |
| Citation Architecture | Structured content that AI can extract and attribute | Even good coverage doesn't translate into clean citations |
| Distribution Across Answer Surfaces | Ensuring the brand appears in responses across engines | Coverage exists but never surfaces at query time |
| Measurement | Tracking share of citation across AI engines | No feedback loop to know what's working |
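The measurement layer's core metric, share of citation, can be sketched simply: prompt each engine with a panel of queries, log which brands get cited, and compute your brand's fraction per engine. A minimal sketch with hypothetical logs (the engine names and brand labels are placeholders, and real pipelines would normalize brand aliases before counting):

```python
from collections import Counter

def share_of_citation(citation_logs, brand):
    """Fraction of observed citations attributed to `brand`, per engine.
    `citation_logs` maps engine name -> list of cited brand names."""
    shares = {}
    for engine, cited in citation_logs.items():
        counts = Counter(cited)
        shares[engine] = counts[brand] / len(cited) if cited else 0.0
    return shares

# Hypothetical logs from prompting two engines with the same query panel
logs = {
    "perplexity": ["acme", "rival", "acme", "other"],
    "chatgpt": ["rival", "rival", "acme", "other", "other"],
}
print(share_of_citation(logs, "acme"))
```

Tracked over time, this is the feedback loop the fifth layer describes: it shows whether new earned coverage actually moves citation share on the engines that matter for your category.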
The C-SEO Bench finding -- that most optimization tactics don't work -- describes what happens when practitioners operate at Layers 3 and 4 without Layer 1. Structuring content for extraction before establishing the third-party authority that makes that content worth extracting.
GEO without earned media is a structurally incomplete strategy. The Stacker and Scrunch research on earned versus owned citation rates found 325% more AI citations from earned media distribution than from owned content alone. That differential is not bridgeable by optimizing the owned side harder.
## What founders and marketing executives should do
Forrester named AI visibility "the defining priority for B2B marketing leaders in 2026," citing a poll of 150 B2B marketers in which 69% said AI visibility is now a top CMO or CEO priority. Forrester's research describes what happens when brands lose visibility in answer engines as "a collapse of visibility that destabilizes the traditional revenue engine." What Forrester doesn't say -- but the research does -- is that the path back runs through earned media, not technical optimization.
If your team is investing in GEO -- and it should be -- the first question is what the earned media foundation looks like. Not the technical GEO audit. Not the FAQ architecture review. The publication audit: where does your brand appear, in which publications, with what attribution quality, and how recently?
For brands with strong earned coverage, GEO optimization -- structured content, schema, FAQ sections, entity markup -- can meaningfully improve how that coverage is extracted, attributed, and surfaced across AI engines. The technical work has real return when the foundation is there.
For brands with thin or no earned media, GEO optimization produces structurally limited results. You can have a perfect GEO-16 score, clean schema, FAQ-formatted pages, and consistently cited statistics, and still not appear in AI responses for your core competitive queries. AI engines are sourcing those responses from publications that have covered your competitors with independent editorial authority.
A 2026 academic study presented at the International Public Relations Research Conference, conducted by Fullintel and researchers at the University of Connecticut, found that 47% of all AI citations in responses came from journalistic sources, with 89% of those citations from earned media and 95% from unpaid sources. The study is notable because it comes from communications researchers, not GEO practitioners, independently confirming the same structural pattern.
Search Engine Land's 2026 GEO guide put it directly: "Digital PR and thought leadership are direct GEO levers. Research shows AI engines favor earned media -- third-party coverage, reviews, and industry mentions -- over content on your own site." That is the canonical search industry publication telling its own readership that PR is now a GEO tactic.
The academic consensus is direct about this. The University of Toronto study identifies dominating earned media as "the critical need" for AI visibility, framing it alongside content engineering but positioning it as the primary lever.
For founders making resource allocation decisions between earned media investment and technical GEO investment: build the foundation first. Earned media in Tier 1 publications that your prospects and AI engines both read. Editorial relationships, not spray-and-pray pitch volume. Outcomes-based placement, not retainer cycles that produce no citations.
GEO tells you how to structure content so AI can extract it. Earned media tells AI engines your content is worth extracting in the first place. Christian Lehman, cofounder of AuthorityTech, has written on how this plays out in practice: the companies that treat GEO and earned media as sequential rather than parallel consistently underperform against those that build both simultaneously.
## FAQ
### Does this mean on-page GEO optimization is useless?
No. The GEO-16 framework found a meaningful quality correlation with citation probability, and the research from Tian et al. (arXiv:2603.09296) showed a 40% relative improvement in citation rates from targeted content repairs. The issue is that these gains are multiplicative on top of source authority -- they improve citation rates for sources already being considered. Without earned media, there are no citations to improve the rate of.
### Which GEO tactics have the most evidence behind them?
The GEO-16 findings point to recency metadata, semantic HTML structure, and valid structured data as the three pillars most associated with citation behavior across Brave, Google AIO, and Perplexity. The C-SEO Bench study found traditional SEO-style authority building more effective than content rewriting. Statistics and original data -- when sourced from trusted domains -- improve extraction probability per the Princeton/Georgia Tech GEO paper.
### How long does it take for earned media to affect AI citations?
The Stacker and Scrunch study found citation lift within 30 days for earned media distributed across trusted news outlets. Perplexity updates its source index frequently. The Signal Genesys study, analyzing 179.5 million citation records across 6 LLM platforms, found Perplexity drives the largest citation volume of any AI platform and is among the fastest to index new content from trusted sources.
### What counts as a Tier 1 publication for AI citation purposes?
Ahrefs found that 65.3% of ChatGPT's top citations go to domains with DR80 or higher. The publications AI engines consistently pull from are those with strong editorial credibility and substantial web presence: major business outlets (Forbes, Bloomberg, WSJ, Reuters), trade publications with category authority (TechCrunch, VentureBeat, Fast Company), and high-authority research platforms. The specific publications that matter vary by industry and query type -- the pattern that holds is that AI engines favor publications their training data established as credible sources for a given topic cluster.