The Journalists AI Cites Are Not the Ones Your PR Team Pitches. Here's the 2-Hour Audit.
MuckRack analyzed over 1 million AI citations and found only a 2% overlap between the journalists PR teams pitch most and the journalists AI engines actually cite for those brands. Here's the audit that realigns your pitch strategy with where AI visibility is built.
Key Takeaways
- MuckRack analyzed 1 million+ AI citations across ChatGPT, Gemini, Perplexity, and Claude — 82% are earned media, 94% are non-paid
- Only 2% overlap between the journalists PR teams pitch most and the journalists AI engines actually cite for those brands
- Research posted to arXiv found GEO content optimization has zero correlation with AI discovery rates; what predicts visibility is referring domains
- Press release citations from wire services grew 5x between July and December 2025, with growth concentrated in releases carrying 2x more statistics and 2.5x more bullet points than typical
Most B2B marketing teams check their AI citation rate, find it lacking, and immediately look at their website. They audit their structured data, clean up their schema, add FAQ sections, and rewrite intro paragraphs to answer questions more directly.
That's the wrong diagnostic.
MuckRack analyzed over 1 million links cited by web-enabled AI engines across ChatGPT, Gemini, Perplexity, and Claude between July and December 2025. What they found reframes where the investment should go: 82% of all AI citations are earned media. About 25% are journalistic. Non-paid media accounts for 94% of what AI cites.
Your website is doing less work than your PR program. And your PR program is probably targeting the wrong journalists.
The 2% problem
MuckRack compared the journalists their users pitched most frequently against the journalists AI engines actually cited for those brands. The overlap was 2%.
That number should land hard. If your PR strategy is built around audience size, editorial prestige by human standards, or category beat alignment as your PR firm defines it, you have a program optimized for an audience AI doesn't consult. The outlets where you're winning coverage are not necessarily the outlets AI engines have decided to trust.
This is a different problem than what most citation-failure diagnostics target. The failure-mode research published last week focused on why earned media placed in the right outlets still doesn't generate citations — extraction barriers, structural issues, retrieval problems. That's a pipeline problem. The 2% gap is upstream of the pipeline entirely: you're targeting journalists AI doesn't cite.
What GEO content optimization doesn't fix
The default response to low AI visibility is to optimize content. A study posted to arXiv in January 2026 tested whether that actually moves the needle.
The study ran 2,240 discovery queries across ChatGPT and Perplexity for 112 products. When asked about a product by name, AI recognized it almost perfectly: a 99.4% recognition rate on ChatGPT. When asked category-level discovery questions like "What are the best [category] tools this year?", the discovery rate collapsed to 3.32% on ChatGPT. A 30-to-1 gap between recognition and recommendation.
The finding that disrupts most GEO playbooks: products with high GEO optimization scores were no more likely to be discovered than products with low scores. Zero correlation. What predicted Perplexity visibility wasn't content structure — it was referring domains (correlation r=0.319) and community presence.
AI doesn't discover brands because those brands optimized their heading hierarchy. It discovers brands that have built the web infrastructure AI trusts: referring domains, third-party editorial placement, and category-level presence in the outlets models have already decided are credible.
MIT Sloan Management Review documented the consequence for a major U.S. fitness brand, one with the largest market share in its category and the highest SEO investment among competitors. A buyer queried ChatGPT for top-rated options. The market leader wasn't recommended. A small, local competitor in Houston was. Not because the Houston competitor had better content structure. Because AI had encountered that competitor in the sources it trusts.
You cannot GEO your way to discovery. You earn your way to it.
The 2-hour audit
Step 1: Map the journalists AI is actually citing in your category (45 minutes)
Open ChatGPT, Perplexity, and Claude with web search enabled. Run the five questions your best prospects ask when evaluating vendors in your category: comparison queries, problem queries, category queries. For each response, log the outlets cited and the journalists on the bylines.
Do this across all five queries in all three engines. You're building a map of the 8–12 publications and 15–25 journalists AI treats as authoritative in your space. That map exists whether your PR team knows it or not.
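A spreadsheet works for this, but a small script makes the map sortable and repeatable across audits. Below is a minimal Python sketch for logging and aggregating what you observe; the fields and sample rows are all illustrative, since you populate the log by hand from the sources each engine displays.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    engine: str      # "chatgpt", "perplexity", or "claude"
    query: str       # the buyer question you ran
    outlet: str      # publication cited in the response
    journalist: str  # byline, if the engine surfaces one ("" if not)

# Populate by hand as you run 5 queries x 3 engines. Rows are hypothetical.
log: list[Citation] = [
    Citation("perplexity", "best data pipeline tools", "TechRadar", "Jane Doe"),
    Citation("chatgpt",    "best data pipeline tools", "TechRadar", "Jane Doe"),
    Citation("claude",     "etl vs elt for mid-market", "InfoWorld", "John Smith"),
]

outlet_counts = Counter(c.outlet for c in log)
journalist_counts = Counter(c.journalist for c in log if c.journalist)

print("Outlets AI treats as authoritative:")
for outlet, n in outlet_counts.most_common(12):
    print(f"  {outlet}: cited {n}x")

print("Journalists to target:")
for journalist, n in journalist_counts.most_common(25):
    print(f"  {journalist}: {n} citations")
```

The `most_common(12)` and `most_common(25)` cutoffs mirror the 8–12 publications and 15–25 journalists the map is meant to surface.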
Step 2: Compare against your current pitch list (30 minutes)
Pull your active pitch list. Count how many journalists appear in the map you just built.
If the overlap is under 20%, you are building coverage optimized for human readership that AI doesn't consult. That coverage has real value — event-driven awareness, late-stage buyer validation, recruitment. But if your earned media program is supposed to be building AI citation rate, the journalists you're targeting need to be the journalists AI cites. Right now, if MuckRack's data holds for your category, they're not.
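The overlap number itself is a set intersection. A minimal sketch, assuming you've already normalized names (consistent casing, no stray whitespace) in both lists; the names here are placeholders:

```python
# Your active pitch list vs. the journalists from the Step 1 map.
pitch_list = {"alex rivera", "sam chen", "priya patel", "jordan lee"}
ai_cited   = {"jane doe", "john smith", "priya patel"}

overlap = pitch_list & ai_cited
overlap_pct = 100 * len(overlap) / len(pitch_list)

print(f"Overlap: {len(overlap)} of {len(pitch_list)} journalists ({overlap_pct:.0f}%)")
print("Pitched but never cited by AI:", sorted(pitch_list - ai_cited))
print("Cited by AI but never pitched:", sorted(ai_cited - pitch_list))
```

The two difference sets are the actionable output: the first shows where effort may be misallocated, and the second feeds directly into Step 3.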
Step 3: Rebuild your target list around AI citation behavior (45 minutes)
Replace or supplement your pitch list with the journalists and outlets from Step 1. For each, note what they cover, their recent cadence, and whether they've cited your competitors.
Then build three to five pitch angles for these specific journalists. The content that earns placement in AI-trusted outlets tends to share specific characteristics: it is data-forward, carries named expert attribution, and makes specific claims rather than relying on narrative framing. If your pitch deck is built around product capabilities, it needs to be rebuilt around the questions these journalists are actually covering.
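One way to sequence the rebuilt list: score each journalist from the Step 1 map by citation frequency and competitor coverage, and pitch from the top of the ranking. The weighting below is an illustrative assumption, not a validated model:

```python
# Hypothetical entries carried over from the Step 1 map.
journalists = [
    {"name": "Jane Doe",   "citations": 7, "covers_competitors": True},
    {"name": "John Smith", "citations": 3, "covers_competitors": False},
]

def priority(j: dict) -> float:
    # Weight competitor coverage heavily: these journalists already write
    # the category stories AI cites; you're simply absent from them.
    return j["citations"] + (5 if j["covers_competitors"] else 0)

for j in sorted(journalists, key=priority, reverse=True):
    print(f"{j['name']}: score {priority(j)}")
```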
The press release format that's working now
MuckRack's data shows one additional shift worth building into operations: citations from wire-distributed press releases grew 5x between July and December 2025. That growth is concentrated in ChatGPT and Gemini, driven by format. The releases AI cites most often share a pattern — twice as many statistics as typical releases, 2.5x more bullet points, 30% more action verbs, and significantly more objective sentence construction.
If your releases are written as promotional summaries, they're not generating citations. If they're written as factual, data-forward documents structured for extraction, they increasingly are. This is a format change you can make to the next release your team sends.
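A rough pre-send check can tell you which kind of release a draft is. The sketch below counts statistics and bullet lines as crude proxies for "data-forward"; since MuckRack's figures (2x stats, 2.5x bullets) are relative to typical releases, compare a draft against your own recent baseline rather than any absolute threshold:

```python
import re

def format_check(release: str) -> dict:
    """Count rough proxies for a data-forward release: stats and bullets."""
    lines = release.splitlines()
    words = release.split()
    stats = re.findall(r"\d[\d,.]*%?", release)  # numbers and percentages
    bullets = [l for l in lines if l.strip().startswith(("-", "*", "•"))]
    return {
        "stats_per_100_words": round(100 * len(stats) / max(len(words), 1), 1),
        "bullet_lines": len(bullets),
    }

draft = """Acme launches DataPipe 3.0
- Processes 2.4B events per day, up 60% year over year
- Cuts pipeline latency 35% in benchmark tests
Pricing starts at $499 per month."""

print(format_check(draft))  # e.g. {'stats_per_100_words': 18.5, 'bullet_lines': 2}
```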
What this is actually called
The 2% journalist overlap is the Machine Relations problem made visible. The discipline is not about optimizing your website for AI engines. It's about understanding the authority structure AI has already built and engineering your presence into it: specifically, into the outlets and journalists it trusts.
Your PR program may be delivering strong results by traditional metrics. If it isn't targeting the journalists AI consults, it is doing something useful, but something different from building AI visibility. Those aren't the same program, and they shouldn't share the same targeting logic.
Run the audit. Two hours. You'll know exactly where the gap is.
For a structured map of where your brand currently appears across AI engines, by outlet and query type, the visibility audit benchmarks your citation profile so you're not guessing which journalists to prioritize.
Sources: MuckRack Generative Pulse, December 2025 · arXiv:2601.00912 — The Discovery Gap (January 2026) · MIT Sloan Management Review — Can Customers Find Your Brand?