Which Publications Get Cited Most by AI Search Engines in 2026
Analyzed across 11 industries and 1M+ citations: which media outlets ChatGPT, Perplexity, and Google AI Overviews actually cite — and what it means for B2B PR strategy in 2026.
Most B2B companies treat media coverage as a traffic problem. Get into Forbes, watch the referral sessions climb. Get onto TechCrunch, see the inbound spike. That mental model was accurate for a decade. It's now only half the story.
The other half: AI search engines are making their own editorial decisions about which publications to cite when your prospects ask questions about your category. The logic they use isn't identical to what drove human click-through. It's structured around domain authority patterns, publication trust signals, and which outlets the models were built to index in the first place. Getting that mapping wrong means investing PR budget in outlets that drive human traffic but produce no AI visibility. Getting it right means showing up in ChatGPT and Perplexity answers when a buyer is actively researching whether to shortlist you.
Several major research efforts — published between mid-2024 and late 2025 — now give enough signal to draw concrete conclusions. Here is what the data shows.
Key findings
- Forbes is the only traditional media outlet cited by AI search engines across all 11 major B2B and B2C sectors in the analysis — more than 10,000 AI citations in a single study.
- ChatGPT and Gemini heavily cite Reuters, the Financial Times, Forbes, Axios, and Time. Claude cites the same outlets far less — sometimes 50 times less frequently than ChatGPT for the same publication.
- Journalistic content accounts for 27% of AI citations overall, rising to 49% for queries that imply any time-sensitivity.
- AI systems heavily favor recent coverage. More than half of journalism citations come from articles published within the last 12 months. The highest citation rate for any piece of content occurs within 7 days of publication.
- Press release citations increased 5x between July and December 2025, now accounting for 1% of AI citations — still small, but the fastest-growing format in the Muck Rack dataset.
Forbes is the only media outlet that shows up everywhere
Search Engine Land analyzed citation data from more than 800 websites across 11 industries using Semrush's AI citation tracking. The October 2025 analysis identified four domains that appeared among the top cited sources in every single sector: Reddit, Wikipedia, YouTube, and Forbes.
Reddit's dominance reflects community-driven queries. Wikipedia's is unsurprising. YouTube covers video-native topics. Forbes is the only traditional editorial publication on that list — cited roughly 10,000 times across all 11 sectors. Business Insider appeared in most sectors; LinkedIn showed up in 10 of 11.
The Finance sector makes the dynamic clearest. The Semrush data shows that "media brands such as Forbes and Business Insider dominate citations, reflecting the importance of timely commentary and market analysis." NerdWallet's presence in the same sector demonstrates a secondary pattern: niche specialists with deep evergreen guides can build AI citation authority in a single sector without the breadth of a Forbes. But cross-sector AI visibility, the kind that positions a B2B brand as credible to a general buyer, requires Forbes-level breadth.
The implication: a Forbes placement isn't interchangeable with coverage in a smaller trade publication, even a well-regarded one. AI search systems have effectively made that decision already. Your PR strategy either reflects that hierarchy or it doesn't.
ChatGPT, Gemini, and Claude don't cite the same publications
The platforms aren't running the same algorithm. Muck Rack's Generative Pulse report, which analyzed over one million citations from major generative AI models, documented specific differences between how each model pulls from journalism. Nieman Lab covered the findings in July 2025.
For ChatGPT and Gemini, the top cited outlets included Reuters, the Financial Times, Time, Forbes, and Axios. The Financial Times, Time, and Axios all have content licensing agreements with OpenAI — Nieman Lab noted this in its coverage — though the citation patterns held for Gemini as well, which suggests the licensing relationship isn't the only variable.
Claude behaves differently. The same research found that Claude cites Reuters 20 times less often than Gemini and 50 times less often than ChatGPT. Claude's top-cited outlets skewed toward sources like Harvard Business Review and TechRadar — more evergreen, more analytical, less oriented toward breaking news.
An independent analysis published on arXiv in July 2025, examining over 366,000 citations from AI search responses across OpenAI, Perplexity, and Google, confirmed the pattern: "models from different providers cite distinct news sources" even while sharing general structural tendencies around citation concentration. The paper found that all three platforms drew heavily from a small number of outlets — but the specific outlets differed by platform.
For B2B brands, this creates a specific problem. If your AI visibility strategy is built around a single publication, you're probably winning one platform and losing the others. A Forbes placement gives you strong coverage in ChatGPT and Gemini responses. For the analyst-heavy, HBR-first audience that Claude tends to serve, the path looks different. Neither platform alone covers the full picture.
The recency premium changes how you think about PR campaigns
The Muck Rack Generative Pulse data contains a finding that most PR teams haven't operationalized yet: AI systems don't treat all coverage equally by age. Fast Company reported that more than 95% of AI citations come from non-paid coverage, with 89% sourced from earned media. More specifically, the Muck Rack data shows that over half of all journalism citations observed were from articles published in the last 12 months, and the highest citation rate for any article occurs within 7 days of publication.
For ChatGPT specifically, 56% of its journalism citations referenced pieces published within the past year. That means a placement in Forbes from three years ago is doing substantially less work than a placement from last month — at least in terms of AI search visibility.
This inverts the traditional logic that strong coverage compounds indefinitely. The evergreen content model still works for Google organic. It works less well for AI search, which weights toward freshness. A brand that secures consistent earned media placements across credible outlets every month will outperform one that lands a single major feature annually, at least in terms of AI citation volume over time. That's a structural argument for PR programs built around sustained cadence, not sporadic big hits.
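The cadence argument can be made concrete with a toy model. To be clear about what's assumed: the exponential decay shape and the 180-day half-life below are hypothetical illustrations, not figures from the Muck Rack research, which establishes only that recent coverage is cited far more often, not the exact decay curve.

```python
# Toy model of the recency premium. The half-life is an assumption chosen
# for illustration; the cited research does not specify a decay function.
HALF_LIFE_DAYS = 180  # assumed: a placement's citation value halves every ~6 months

def recency_weight(age_days: float) -> float:
    """Exponential decay of a placement's AI-citation value with age."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def portfolio_value(placement_ages_days: list[float]) -> float:
    """Total citation-weighted value of a set of placements."""
    return sum(recency_weight(a) for a in placement_ages_days)

# Sustained cadence: one placement per month over the past year.
cadence = portfolio_value([30 * m for m in range(12)])
# Sporadic big hit: a single major feature from a year ago.
big_hit = portfolio_value([365])

print(f"cadence: {cadence:.2f}, big hit: {big_hit:.2f}")
```

Under these assumed parameters, twelve monthly placements carry well over an order of magnitude more citation-weighted value than the single year-old feature. The exact multiplier depends entirely on the half-life you assume, but the direction of the result holds for any model where freshness decays: cadence beats one-off volume.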
The query type determines which sources get cited
Not every query pulls from the same source pool. The Muck Rack research found a consistent split between two query types that has direct implications for B2B content strategy.
For queries implying time-sensitivity — anything phrased with words like "recent," "latest," "current," or with an implicit assumption of freshness — journalistic content accounts for 49% of citations. A separate Search Engine Land analysis of 8,000 AI citations found that news sites account for roughly 26% of all AI citations, while corporate blogs account for 39% — consistent with the Muck Rack split between query types. AI engines go to Reuters and Forbes for time-sensitive queries, not to a company's about page.
For subjective queries — advice-oriented questions like "how do I improve AI search visibility" or "what's the best PR strategy for B2B SaaS" — AI systems pull more heavily from corporate blogs and content. That's where owned media earns its keep in AI search: answering the "how do I" layer that journalism doesn't typically cover in operational depth.
This creates a useful division. Earned media placement handles the "who is credible in this space" queries, where brands get recommended or dismissed based on third-party coverage. Corporate content handles the "how do I solve this problem" queries, where depth and specificity matter more than brand authority. Both types of queries matter for B2B pipeline. Neither substitutes for the other, and conflating them produces bad outcomes in both directions.
Press releases are re-entering the citation stack
One data point from the Muck Rack Generative Pulse December 2025 report that has gone largely undiscussed: press release citations by AI search engines increased fivefold between July and December 2025. They now account for 1% of all AI citations — a small number, but the fastest-growing format in the dataset.
The specific outlets driving this: PR Newswire, Business Wire, and GlobeNewswire are all seeing increased citation volume. This is unusual. Newswire citations were largely absent from earlier AI citation analyses. Their emergence suggests that AI systems with real-time indexing capabilities are increasingly pulling from high-frequency, verifiable factual sources when establishing the timeline of a funding round, product launch, or executive appointment.
The implication isn't that press releases are becoming editorial content. They're not. But for brands that issue them for genuine business events — fundraising rounds, product launches, significant partnerships, executive hires — distributing via major newswires now produces a secondary benefit beyond journalists picking up the story. The newswire distribution itself may be cited directly.
What this means for B2B PR budget allocation in 2026
The data suggests a specific tiering for B2B PR strategy when AI search visibility is a stated objective. Most companies aren't currently budgeting against this hierarchy — they're treating all placements roughly equally. That's the allocation error the citation data makes visible.
The first tier — universal AI visibility — includes Forbes, Reuters, the Financial Times, and Axios. These appear across platforms and sectors. A placement in any of them contributes to AI citation authority in ChatGPT, Gemini, and most other major systems. They also have the highest editorial bar. That difficulty is precisely why AI systems trust them.
The second tier handles sector-specific authority: Business Insider for finance and technology, TechCrunch for tech, Harvard Business Review for management and strategy, TechRadar for enterprise tech. These underperform Forbes for universal citation but over-index for specific platforms — Claude draws heavily from HBR — and specific buyer segments. A fintech brand with strong FT and Business Insider coverage can still build meaningful AI visibility even without Forbes, particularly for Claude-driven research contexts.
The third tier is owned content: case studies, step-by-step guides, product breakdowns. This tier handles the instructional query layer that journalism doesn't cover. It won't get a brand recommended when a buyer asks ChatGPT who leads a category. But it creates the "how to" visibility that drives buyers from awareness to evaluation. It's a different job, and it requires a different content strategy.
The fourth and fastest-growing tier is real-time newswire distribution via PR Newswire, Business Wire, or GlobeNewswire for factual business events. It requires no editorial relationship and gets indexed fast. It doesn't substitute for journalism, but it fills a specific gap: factual events that need to enter the AI-indexed record quickly.
The mistake most B2B PR programs make is treating all four tiers as equivalent. A mid-tier trade publication placement and a Forbes placement look similar in a coverage report. They don't look similar in AI search citation data.
The mechanism behind the hierarchy
What the citation data is documenting is the extension of an older mechanism into a new medium. AI engines decide what to cite using the same signal that determines editorial credibility with human readers: earned media placements in publications the engines treat as authoritative. The publications haven't changed. Reuters and Forbes and the Financial Times were credible sources for B2B decision-making for decades before AI search existed. What changed is who's doing the reading.
When a prospect asks ChatGPT which companies lead your category, the answer is downstream of which companies have earned placements in the outlets ChatGPT was trained to trust. Not ad spend. Not optimized landing pages. Third-party editorial credibility — PR's original mechanism — is now also the mechanism that determines AI search visibility.
That's what Machine Relations describes as the defining shift in how brand authority gets built. PR convinced journalists to cover you. Machine Relations is what happens when you understand that the same publications those journalists write for are now also read by the AI systems your buyers use to research decisions. The mechanism is identical. The stakes for getting the publication tier right have doubled.
For B2B brands planning 2026 PR budgets, the question isn't just "will Forbes coverage drive traffic." It's "will it show up when my prospect asks ChatGPT who to call." Based on the current citation data, those are now the same question.