Your Content Hits 5% AI Coverage. Earned Distribution Gets You to 18%. New Study Shows the Gap.
A new 87-story, 8-platform study from Stacker and Scrunch found earned media distribution produces a median 239% lift in AI citations. The tactical playbook for coverage breadth — the new GEO KPI your team isn't tracking.
You ran the audit. Your brand shows up in ChatGPT when you search your category question. Then you check Perplexity. Different results, different sources. Then Gemini. Inconsistent again.
That variance has a name now. Stacker and Scrunch published research this week — the largest controlled GEO study to date — and the finding operators need to act on is this: cross-platform citation consistency is the metric that predicts AI visibility performance, and earned media distribution is the lever that moves it.
Here's the data and what to do with it.
What the new Stacker/Scrunch research found
Published March 16, 2026, the Stacker/Scrunch study analyzed 87 stories across 30 brands, queried 2,600+ prompts across 8 AI platforms, and measured two things simultaneously: citations to the brand's own domain, and citations to publisher sources in the distribution network. The results held up under statistical significance testing (p < 0.006), making this the first controlled study at this scale to clear that bar.
The key findings:
| Metric | Result |
|---|---|
| Median lift in AI citations from earned distribution | 239% |
| Cross-platform AI coverage (unassisted) | 5.4% |
| Cross-platform AI coverage (with earned distribution) | 17.9% |
| Stories earning at least one AI citation (distributed) | 97% |
| Stories earning at least one AI citation (owned only) | 82% |
| AI citations from third-party publisher sources | 64% |
| Distributed versions as sole source of AI visibility | 5.3x more likely than brand's own site |
That last row is the one to read twice. The brand created the content. The publisher earned the citation. Owned content starts the process. Distribution is what makes AI engines act on it.
This is Stacker's second study on this question — the first, in December 2025, showed 325% lift across eight stories. The March 2026 study scaled that by 10x and came back with a more conservative but statistically validated 239%. The data is no longer directional. It's predictive.
Coverage breadth: the GEO KPI your team isn't tracking
The study introduces a metric that should be in every GEO reporting deck: coverage breadth. It measures not whether your brand was cited in one AI engine, but how consistently it surfaces across multiple platforms for the same underlying question.
Stacker CEO Noah Greenberg: "AI search isn't a single ranking position; it's a long tail played across platforms, prompt variations, and answer formats. Our data shows that coverage breadth is the new authority signal."
If you're currently measuring ChatGPT citation rate, you're tracking one data point in a probabilistic game played across eight platforms. A brand with a 40% citation rate on ChatGPT but 3% on Perplexity and Gemini has a coverage breadth problem, and the inconsistency is costing it pipeline.
The benchmark from this research: unassisted brand content sits at 5.4% cross-platform coverage at the median. With earned distribution, that climbs to 17.9%. That gap is the working surface area of most teams' AI citation problem.
This matters alongside share of citation — the metric that tracks how often your brand appears as a named answer for your target queries relative to competitors. Coverage breadth explains why share of citation varies by platform, and what to do about it.
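The study doesn't publish its exact formula, but a workable in-house version of coverage breadth is the share of query-and-platform checks where your brand is cited. A minimal sketch in Python, using an entirely hypothetical citation log:

```python
# Hypothetical citation log: one record per (query, platform) check,
# where "cited" means the brand appeared as a named source in the answer.
citation_log = [
    {"query": "best crm for startups", "platform": "chatgpt",    "cited": True},
    {"query": "best crm for startups", "platform": "perplexity", "cited": False},
    {"query": "best crm for startups", "platform": "gemini",     "cited": False},
    # ...one row per query x platform pair, across all target queries
]

def coverage_breadth(log):
    """Share of (query, platform) checks where the brand was cited."""
    return sum(row["cited"] for row in log) / len(log) if log else 0.0

print(f"Coverage breadth: {coverage_breadth(citation_log):.1%}")  # 33.3% here
```

Read against that definition, the study's unassisted median of 5.4% works out to roughly one citation for every nineteen query-and-platform checks.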
Why your blog can't solve this
The issue is not content quality. A well-researched piece on your own domain is necessary. It's not sufficient.
The Stacker data found that distributed versions were 5.3x more likely to be the sole source of a story's AI visibility than the brand's own site. In most cases where a story got cited, it was the third-party publisher version doing the work.
The reason is structural. AI engines treat earned media differently from self-published content because the publication is the credibility signal. A brand-owned page asserting expertise is self-assertion. The same claim in a recognized outlet that applied editorial standards to it is third-party validation. These produce different citation probabilities regardless of content quality.
The research base supporting this pattern is now substantial:
- Muck Rack's Generative Pulse analysis, which examined over one million AI-cited links across ChatGPT, Gemini, and Claude, found that more than 85% of non-paid AI citations come from earned media sources
- Moz's analysis of 40,000 queries found that 88% of Google AI Mode citations are not in the organic SERP — AI citation behavior operates largely independent of search rankings
- Bain's 2025 consumer research found that 80% of search users now rely on AI summaries at least 40% of the time — the audience these citations reach is no longer a side channel
- The Princeton/Georgia Tech GEO study (Aggarwal et al., SIGKDD 2024) found that adding statistics and credible source citations each improves AI visibility by 30–40%
The combination of distribution to credible publishers and well-structured content is what produces the citation lift the Stacker study measured. Research on this pattern is documented at machinerelations.ai.
The execution playbook
Step 1 — Measure coverage breadth before you change anything. Run your top five category queries across ChatGPT, Perplexity, and Gemini with web search enabled. For each query, note which platform cited you, which didn't, and which publications showed up in the citations. Build a simple table: query, AI engine, your citation status, competitor citation status, dominant outlet type. That table is your baseline. The gap between platforms is your coverage breadth deficit.
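If that baseline lives in a spreadsheet, a few lines of Python turn it into per-platform citation rates, which is where the breadth deficit becomes visible. The file name and column names below are illustrative, not prescribed by the study:

```python
import csv
from collections import defaultdict

# Illustrative baseline export: one row per (query, platform) check, filled in by hand.
# Assumed columns: query, platform, we_cited (yes/no), competitor_cited (yes/no),
# dominant_outlet_type, cited_outlets (semicolon-separated publisher domains)
with open("ai_citation_baseline.csv", newline="") as f:
    rows = list(csv.DictReader(f))

tallies = defaultdict(lambda: {"cited": 0, "total": 0})
for row in rows:
    tallies[row["platform"]]["total"] += 1
    tallies[row["platform"]]["cited"] += (row.get("we_cited") or "").strip().lower() == "yes"

for platform, t in sorted(tallies.items()):
    rate = t["cited"] / t["total"]
    print(f"{platform}: cited on {t['cited']}/{t['total']} queries ({rate:.0%})")
```

The spread between the best and worst platform rate is the deficit you're trying to close.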
Step 2 — Match distribution targets to where AI cites in your category. Not all coverage produces citation lift. The Stacker study found that distribution network quality matters as much as content quality — specifically whether the publishers are ones AI engines already treat as credible in your category. After Step 1, you'll see 5–8 outlets that appear consistently in citations for your category questions. Those are the distribution targets. Coverage in outlets that don't appear in those patterns has limited coverage breadth value regardless of that outlet's traditional metrics.
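A hedged sketch of that outlet analysis, reusing the same illustrative baseline file from Step 1 (the cited_outlets column is an assumption for this example, not part of the study's methodology):

```python
import csv
from collections import Counter

with open("ai_citation_baseline.csv", newline="") as f:
    # Count how often each publisher domain appears in the citations you logged
    outlet_counts = Counter(
        outlet.strip().lower()
        for row in csv.DictReader(f)
        for outlet in (row.get("cited_outlets") or "").split(";")
        if outlet.strip()
    )

# Publishers that recur across queries and platforms are the candidate distribution targets
for outlet, hits in outlet_counts.most_common(8):
    print(f"{outlet}: appeared in {hits} query/platform checks")
```

The domains at the top of that list are where a placement is most likely to convert into coverage breadth.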
Step 3 — Build for consistent velocity, not periodic campaigns. The Stacker data found that earned distribution's share of AI visibility was highest where a brand's existing AI footprint was smallest. AI citation ecosystems weight recency and specificity more than accumulated domain authority — a current story distributed this week can surface ahead of a competitor's year-old piece in a bigger outlet. The practical cadence: 2–4 placements per month on a consistent basis outperforms a single quarterly push.
The failure mode most teams fall into
Confusing earned media mentions with earned media distribution.
A press release on a wire service doesn't qualify. Neither does a logo mention in a trade roundup. The citation infrastructure that produced the 239% lift came from stories republished across a network of credible publisher domains, not covered once and left at a single URL.
Coverage breadth is the measure of how widely and credibly a story exists across the web, not how many times it was mentioned. AI engines encounter the same story in multiple contexts and treat that multi-domain presence as a stronger signal than a single high-authority placement.
One placement gives a model one data point. The same story across fifteen credible domains gives the model a pattern it treats as established fact. That distinction is what the 5.3x multiplier is actually measuring.
Key takeaways
Coverage breadth is now a GEO reporting metric. If you're only tracking citation rate on one platform, you're missing the metric that predicts consistency across all of them.
Owned content is the starting line, not the finish. Brand-site content sat at a median of 5.4% cross-platform citation coverage before distribution. The climb to a 17.9% median after distribution is the gap earned placement closes.
Distribution network quality determines citation quality. Getting placed in credible publisher domains your category's AI engines already cite is the operative variable — not earned media volume in general.
Challenger brands have the most upside right now. The Stacker data found distribution's share of AI visibility was highest where existing brand AI footprints were smallest. AI search is not yet a game of incumbent advantage. The window is open.
Why distribution works at the infrastructure level
Machine Relations, the discipline of ensuring your brand gets cited by AI systems rather than buried by them, rests on one foundational observation: AI engines use the same signal that made earned media valuable in the first place. Placements in publications that apply editorial standards are what these systems pull from when they build answers. The publications haven't changed. What changed is that machines are now the primary readers of what gets published in them.
The Stacker data quantifies that at scale. Distribution to credible publishers tripled coverage breadth because AI systems are running a corroboration check: does this brand appear in multiple credible contexts, or only on its own site? The 5.3x multiplier is the difference between self-assertion and multi-domain validation.
For operators, the question is no longer whether earned media matters for AI visibility. The research on that question is settled. The question is whether you're distributing the coverage you have in a way that builds actual coverage breadth — or leaving a 239% citation lift on the table.
Start with the diagnostic. Run those five category queries across three platforms today. The gaps you find are the roadmap.
To see exactly where your brand stands in AI responses before your next distribution decision, run the visibility audit here.