Do Press Releases Help With AI Visibility? What 1+ Million Citations Reveal

Press release citations grew 5x in 2025—yet still represent just 1% of what AI engines cite. Here's what the data says, and what actually drives AI visibility in 2026.

You just paid $750 for a PR Newswire distribution. The release went to 5,000 news sites. And then you searched ChatGPT for the top companies in your category—and your brand still wasn't there.

This isn't a bug. It's the fundamental disconnect between how press releases work and how AI engines decide what to cite.

The answer to "do press releases help with AI visibility?" is: technically yes, practically almost never. Here's what the research shows, why the gap exists, and what founders actually need to build AI citation authority in 2026.


How AI Engines Actually Decide What to Cite

Before evaluating press releases, you need to understand the citation selection mechanism.

A peer-reviewed study published at EMNLP 2025—one of the top natural language processing conferences—found something that most PR teams don't know: for large language models, media source reputation matters more than content quality when generating citations. The research, from teams at Renmin University of China, the National University of Singapore, and the Chinese Academy of Sciences, analyzed how LLMs select sources and found that the name of the media outlet—its established authority signal—is the primary driver of citation selection, not the quality of the content itself.

In parallel, academic researchers analyzed over 366,000 citations from more than 24,000 conversations across OpenAI, Perplexity, and Google AI search systems (arXiv, 2025). Their finding: AI news citations concentrate heavily among a small number of outlets. You're not competing against every indexed webpage. You're competing to appear in a narrow tier of publications that AI engines have already decided to trust.

This is not an algorithm you can game with metadata. It's a trust hierarchy—and it was baked into these models during training.


What the Research Shows About Press Release Citations

The most comprehensive data on press releases and AI citations comes from Muck Rack's Generative Pulse research—one of the only large-scale longitudinal studies tracking how generative AI models cite sources in real time.

Their December 2025 findings, based on analysis of over one million AI citations across GPT, Gemini, and Claude, are striking:

  • Press release citations grew 5x between July and December 2025—from 0.2% to 1% of all AI citations
  • Press releases overall moved from roughly 1.2% to 6% of citations during the same period (including syndicated press release content on news aggregators)
  • Earned media accounts for 82% of all links cited by AI engines
  • Journalistic sources represent approximately 27% of all citations overall, jumping to 49% for queries that imply recency (breaking news, latest developments, timely topics)

The growth in press release citations is real. But the ceiling is visible: after 5x growth, press releases still represent just 1% of what AI engines cite. Meanwhile, earned editorial coverage—stories where a journalist actually chose to cover you—represents the overwhelming majority of the remaining citations.
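The scale of that gap is easy to quantify from the reported figures alone. A minimal back-of-envelope sketch, using only the citation shares from the Muck Rack data cited above (the variable names are illustrative, not part of any published dataset):

```python
# Illustrative arithmetic using the citation shares reported in
# Muck Rack's Generative Pulse data (July vs. December 2025).

press_release_share_jul = 0.002   # 0.2% of AI citations, July 2025
press_release_share_dec = 0.01    # 1% of AI citations, December 2025
earned_media_share = 0.82         # 82% of links cited by AI engines

# The "5x growth" headline number:
growth_multiple = press_release_share_dec / press_release_share_jul
print(f"Press release citation growth: {growth_multiple:.0f}x")

# How far press releases still sit below earned media, even after that growth:
gap = earned_media_share / press_release_share_dec
print(f"Earned media is cited ~{gap:.0f}x more often than press releases")
```

Even after quintupling, press release citations remain roughly eighty times rarer than earned editorial citations, which is why the growth curve and the ceiling are both real.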

This isn't a coincidence. It reflects how AI models were trained.


Why Earned Editorial Coverage Dominates

Generative AI models were trained on internet-scale text data—and that data is heavily weighted toward journalism. The models absorbed the structure, sourcing norms, and authority signals of editorial content before they ever saw a press release syndication page.

The Muck Rack research confirms what this training history predicts: major models cite journalism in nearly half of all responses requiring recency. And when the Nieman Journalism Lab analyzed which specific outlets these models prefer, the pattern was unmistakable: Reuters, the Financial Times, Forbes, and Axios consistently appeared in the top-cited lists for both ChatGPT and Gemini (Nieman Lab, July 2025).

These aren't outlets where you can buy a placement. They're outlets where journalists decide you're worth covering—based on your story, your credibility, your news value, and (increasingly) whether you've built the kind of institutional reputation that makes you citable.

The arXiv research reinforces the concentration effect: news citations don't just favor established journalism generally; they cluster among a remarkably small set of high-authority outlets. Getting your press release indexed across 5,000 syndication partners doesn't help if those partners aren't in the tier AI trusts.


The Wire Distribution Problem

Here's the mechanism that explains the 1% ceiling.

When you distribute a press release via PR Newswire or Business Wire, the primary copy lands on the wire service's own site (which does have AI indexing visibility) and syndicates to hundreds or thousands of local newspaper sites, aggregators, and republishing partners. Most of these destinations have low domain authority. They're not the sites AI engines were trained to prioritize.

The EMNLP 2025 research finding applies directly here: AI models cite sources based on the reputation of the outlet, not on the content itself. A well-written press release published on Podunk LocalNews Aggregator doesn't carry the authority signal of the same story covered by a Forbes journalist who independently chose to write about it.

The difference matters because:

  1. Brand-generated content (your press release) carries no third-party validation. AI engines, trained on journalism, have learned to distinguish between content that was chosen by an editor versus content a company self-published with wire distribution.
  2. Syndication ≠ editorial coverage. Thousands of sites republishing the same release looks like spam to AI crawlers trained on journalistic diversity.
  3. The outlet matters more than the content. Even a compelling press release distributed professionally lands in a citation tier far below a brief mention in the Financial Times.

The December 2025 Muck Rack data captures this precisely. Press releases grew 5x—but the growth likely reflects wire services improving their AI indexing and structured data. The actual editorial authority that drives AI recommendations didn't move. Earned editorial coverage remained at 82%.


The Outlet Concentration Problem

Even if you accept that journalism is what AI cites, the next challenge is which journalism.

The academic citation pattern research makes this uncomfortably specific: AI search systems don't distribute news citations evenly across quality journalism. They concentrate. The top outlets capture a disproportionate share of all AI news citations, and smaller publications—even legitimate editorial operations—rarely appear in AI-generated responses.

This has a direct implication for PR strategy. Getting covered in a regional business journal, a mid-tier trade publication, or a niche industry newsletter is valuable for many reasons—but it's unlikely to drive meaningful AI citation volume. The publications that get cited most by AI search engines in 2026 are a specific, identifiable tier: tier-one business press, major tech outlets, and specific industry authorities that AI engines have internally ranked as high-trust.

Your PR strategy needs to target those outlets specifically—not just "media coverage" generally.


What Actually Works: The Editorial Placement Equation

The research converges on a clear model:

AI citation authority = editorial placements in the specific publications AI engines trust

This is different from:

  • Press release distribution (too low-authority per outlet)
  • Guest posts on brand-selected sites (no independent editorial validation)
  • Thought leadership on platforms AI is less likely to index (LinkedIn, newsletters)
  • Blog content on your own domain (company-owned content, minimal AI citation share)

What works is being chosen—by journalists at outlets AI engines trust—to appear in editorial content. That requires relationships, not software. It requires understanding which publications matter for AI citation authority (not the same as which publications matter for human traffic). And it requires a PR model that's accountable to actual placements, not activity.

This is the model the research on why earned media dominates AI search results points to: not more content, not more distribution, but the right editorial relationships producing the right placements in the right outlets.

For founders evaluating how to build AI visibility, the question isn't "should I use press releases?" The question is: "Am I in the publications AI engines actually cite?" If not, no amount of wire distribution will change that.


What This Means for Your 2026 PR Budget

The data suggests a framework for allocating PR resources around AI citation goals:

High ROI for AI visibility:

  • Direct editorial placements in tier-one business, tech, and industry publications
  • Media relationships that produce original journalist-driven coverage
  • Consistent presence in the specific outlets your ICP's AI searches will cite

Low ROI for AI visibility:

  • Press wire distribution for routine announcements (product launches, hiring news, funding rounds below major thresholds)
  • Syndication-heavy distribution strategies
  • Content designed for AI optimization without independent editorial validation behind it

Some ROI for AI visibility:

  • Press releases tied to genuinely newsworthy events (major funding rounds, significant partnerships, original data) that journalists might pick up and write original coverage about
  • Wire distribution to tier-one wires (PR Newswire, Business Wire) where the wire's own site has some indexing authority—but only as a complement to editorial outreach, not a substitute

Press releases aren't useless. They're just not the mechanism that drives AI citation authority. That mechanism is earning citations in AI search through an earned media strategy—and it requires the editorial relationships that wire services can't provide.


The Machine Relations Conclusion

The reason this matters goes beyond vanity metrics. AI search is now part of the B2B buying process. When a buyer opens ChatGPT or Perplexity to research solutions in your category, they see a curated set of brands—the ones that editorial institutions have validated and AI has learned to trust.

If you're not in that set, you're invisible at exactly the moment a potential customer is forming their shortlist.

Press releases can support a media strategy. But they cannot substitute for the editorial relationships that produce AI citation authority. The companies winning in AI search in 2026 aren't the ones with better wire distribution—they're the ones that have built genuine earned media presence in the publications AI engines were trained to trust.

That's what Machine Relations looks like in practice: systematic earned media in the right outlets, producing the citation signals AI engines use to recommend vendors, tools, and experts to buyers who are actively searching.


FAQ

Do press releases help with AI visibility?

Press releases can contribute marginally to AI visibility. Research from Muck Rack's Generative Pulse shows press release citations grew 5x between July and December 2025—but still represent just 1% of all AI citations. Earned editorial coverage (82% of AI citations) is significantly more effective.

What types of content does AI cite most?

According to Muck Rack's analysis of 1+ million AI citations, earned media accounts for 82% of all links cited by AI engines. Journalism represents ~27% of all citations, rising to 49% for queries implying recency. Brand-owned content (press releases, company blogs, official websites) represents a small minority.

Which publications does ChatGPT cite most often?

Muck Rack's research, reported by the Nieman Journalism Lab, identified Reuters, the Financial Times, Time, Forbes, and Axios as top-cited outlets for both ChatGPT and Gemini. AI citation authority concentrates heavily among a small tier of established editorial outlets.

Why do AI engines favor editorial coverage over press releases?

Peer-reviewed research published at EMNLP 2025 found that LLMs prioritize media source reputation over content quality when generating citations. AI models trained on journalism have internalized editorial authority signals that press releases—brand-generated, un-validated content—simply don't carry.

What's the difference between press release distribution and earned media placement?

Wire distribution places your content on hundreds of syndication sites without independent editorial validation. Earned media placement means a journalist chose to cover you—providing the third-party credibility signal that AI engines are trained to recognize and cite. Syndication volume doesn't substitute for editorial authority.

How long does it take to build AI citation authority?

AI citation authority builds through consistent earned media placements over months, not days. Unlike wire releases (which index quickly but rarely generate lasting citation signals), editorial coverage in tier-one publications creates the cumulative presence that AI engines incorporate into their brand knowledge.

Sources

  1. Dai, S., Cao, Z., et al. "Media Source Matters More Than Content: Unveiling Political Bias in LLM-Generated Citations." Proceedings of EMNLP 2025, Association for Computational Linguistics. https://aclanthology.org/2025.emnlp-main.872.pdf

  2. Yang, K. "News Source Citing Patterns in AI Search Systems." arXiv, 2025. https://arxiv.org/html/2507.05301v1

  3. Minici, M., et al. "Auditing LLM Editorial Bias in News Media Exposure." arXiv, October 2025. https://arxiv.org/html/2510.27489v1

  4. Muck Rack. "What Is AI Reading? Generative Pulse Report." December 2025. https://generativepulse.ai/report/

  5. Muck Rack. "What Is AI Reading? Generative Pulse Report." July 2025. https://generativepulse.ai/report/

  6. Deck, Andrew. "Generative AI Models Love to Cite Reuters and Axios, Study Finds." Nieman Journalism Lab, Harvard University, July 2025. https://www.niemanlab.org/2025/07/generative-ai-models-love-to-cite-reuters-and-axios-study-finds/

  7. Muck Rack. "Earned Media Still Drives Generative AI Citations as Press Release Visibility Grows." GlobeNewswire, December 2, 2025. https://www.globenewswire.com/news-release/2025/12/02/3198248/0/en/Earned-Media-Still-Drives-Generative-AI-Citations-as-Press-Release-Visibility-Grows.html

  8. Nifong, Casey. "How Generative Engines Define and Rank Trustworthy Content." Search Engine Land, September 5, 2025. https://searchengineland.com/how-generative-engines-define-rank-trustworthy-content-461575