Domain Authority Built Your Google Rankings. It Won't Build AI Citations.
New research shows only 7.2% of domains appear in both Google AI Overviews and LLMs. The sources ChatGPT and Claude actually cite look nothing like the backlink profiles operators have been building. Here's what changes.
Here is the pattern showing up in marketing teams right now: a company with a solid backlink profile, a domain authority score they're proud of, coverage in Forbes and TechCrunch, and no presence in ChatGPT answers for their category. They query the AI and see competitors named. They can't figure out why.
The answer has nothing to do with their website. It has everything to do with where they've been building authority.
Google and ChatGPT are pulling from different libraries
A study published in Search Engine Land last October analyzed 8,090 keywords across 25 verticals and compared citation patterns between Google's AI Overviews and LLMs including GPT, Claude, and Gemini. The finding that should stop every content strategist mid-sprint: only 7.2% of domains appear in both systems.
Of the 22,410 unique domains cited across both systems:
- 70.7% appeared exclusively in Google AI Overviews
- 22.1% appeared exclusively in LLM foundation models
- 7.2% appeared in both
That is not a mild preference gap. That is two systems with almost entirely different source lists. The domain authority score your team tracks tells you almost nothing about your AI citation prospects, because Google and ChatGPT are not reading from the same library.
The implications depend on where your buyers actually search. According to SparkToro and Datos research published in January 2026, Google desktop searches per U.S. user fell nearly 20% year over year, and ChatGPT climbed to the seventh most-visited search destination in the country. For B2B buyers doing vendor research, the migration to AI tools isn't on the horizon, it already happened.
What the LLM-exclusive sources actually look like
The 22.1% of domains that appear only in LLMs tell you exactly what these systems value. The research describes them as:
- Investigative journalism from mainstream news publishers covering timely topics
- Niche vertical experts demonstrating deep subject matter expertise within a specific domain: Edmunds, Investopedia, Wired, Allrecipes
- Educational platforms optimized for learning: GitHub, Coursera, Khan Academy
- Authoritative industry data portals: peer-reviewed journals, patents, standards bodies, court records
Notice what's not on that list. High-DA generalist sites with broad topic coverage. Thought leadership roundups. Brand-adjacent content syndication plays. The research puts it plainly: LLMs "prioritize publishers that provide topic depth over topic breadth, and educational value and conceptual clarity over traditional web authority signals."
The conclusion the Fractl team drew: "Your DA 90 site might be invisible to ChatGPT if it doesn't clearly and effectively explain concepts, rather than just ranking well with authority."
That's worth sitting with. A site that has spent years accumulating authority signals can still be functionally absent in AI-generated answers because the content doesn't do the work of actually teaching something.
What this means for a brand currently invisible in AI answers
The operational question isn't "how do I optimize my site for ChatGPT?" Your site isn't where the citation comes from. Search Engine Land's February 2026 analysis of AI SEO describes how AI engines build "entity mass" through third-party citations and corroboration. The mechanism is external. You can't engineer it from your own domain.
The question is: which specific publications in your vertical are niche experts that LLMs already trust? And do you have any meaningful presence in them?
Here's how to find out. Run it this week:
Start by mapping what ChatGPT actually cites in your category. Open ChatGPT or Perplexity. Ask the questions your buyers ask: not branded queries, but the category questions. "What's the best approach to [problem you solve]?" "Which platforms are leading in [your category]?" For each answer, check the sources and write down every publication cited more than once. You're building a shortlist of 8–12 publications that already have LLM credibility in your space.
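That mapping step can be sketched as a small tally script. Everything here is illustrative: the queries, the domains, and the `shortlist` helper are placeholders I've made up for the example, not real data. The point is simply counting which publications recur across the answers you recorded by hand.

```python
from collections import Counter

# Placeholder data: for each category question asked, the domains the
# AI answer cited. In practice you record these manually from ChatGPT
# or Perplexity; these queries and domains are invented for illustration.
answers = {
    "What's the best approach to invoice automation?":
        ["investopedia.com", "nichetrade.example", "forbes.com"],
    "Which platforms are leading in invoice automation?":
        ["nichetrade.example", "g2.com", "investopedia.com"],
    "How do finance teams evaluate AP software?":
        ["nichetrade.example", "cfo.example"],
}

def shortlist(answers, min_answers=2):
    """Domains cited in at least `min_answers` distinct answers, most-cited first."""
    counts = Counter(
        domain
        for domains in answers.values()
        for domain in set(domains)  # count each domain once per answer
    )
    return [(d, n) for d, n in counts.most_common() if n >= min_answers]

print(shortlist(answers))
```

With the sample data above, only the domains cited in two or more answers survive: those are your shortlist candidates, and the recurring niche publication outranks the generalist ones.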
Then sort them by specificity. You'll find a mix: some mainstream (WSJ, TechCrunch), some niche vertical experts (an industry trade pub, a topic-specific news outlet, an educational resource). The niche vertical experts are the highest-leverage targets, specific enough that coverage there means you've become the expert answer on a defined topic, not just a name that appeared somewhere authoritative. The AT blog post on how Perplexity selects sources breaks down why topical authority in a defined domain outweighs raw authority across many topics.
Next, check whether you've been placed in those niche publications: not just named, but actually covered in context, with a link, in a piece that addresses a real question in your category. If the answer is no, you've found the gap.
Last, figure out what type of coverage actually earns placement. Niche expert publications don't run brand fluff. They run topic-depth pieces that make something clearer for their specific audience. The editorial standard is simple: does this content actually explain something, or does it just point to a company? The brands that get cited in AI answers earned it by creating, or funding coverage of, genuinely educational content on a specific topic, placed in publications that own that topic.
This is different from a press release. It's different from a Forbes contributor post. It requires knowing which publication covers your exact problem domain and building a relationship with the editorial team there.
The mistake most teams make at this point
Once teams understand this, the instinct is to replicate the niche expert on their own domain: create deep educational content at home, build topical authority there. It's a reasonable instinct. It won't solve the problem.
Pew Research published data in July 2025 showing that when a Google AI Overview appears, just 1% of users click the links it cites. Seer Interactive's September 2025 analysis put organic CTR at 0.6% when an AI Overview is present. That number tells you something: being cited by an AI system is not primarily a traffic mechanism. It's a trust mechanism. The AI is telling the person asking the question who the authoritative sources are.
That trust signal is external by definition. An AI citing your own website about your own product is not the same thing as an AI citing an independent trade publication's analysis of your category. The citation means something different to the buyer. The ones that drive pipeline are the ones that come through third-party sources that built credibility in that vertical independently.
On-domain content still matters: for how AI agents discover vendors, structured data and clear entity definitions do work. But entity strength in the AI search layer is built through the same mechanism that made PR valuable in the first place: earned coverage in publications your buyers already trust.
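As a sketch of what "structured data and clear entity definitions" can mean in practice, here is a minimal schema.org Organization block generated in Python. The company name, URL, and `sameAs` links are placeholders, and this is one common markup pattern, not a guaranteed recipe for AI discovery.

```python
import json

# Illustrative JSON-LD entity of the kind crawlers and AI agents can
# parse from a vendor's site. All field values are placeholders; the
# "sameAs" links tie the entity to independent third-party profiles.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Vendor",
    "url": "https://www.example.com",
    "description": "States what the company does in plain language.",
    "sameAs": [
        "https://www.linkedin.com/company/example-vendor",
        "https://en.wikipedia.org/wiki/Example_Vendor",
    ],
}

# Wrap the entity in the script tag that carries JSON-LD in a page head.
markup = (
    '<script type="application/ld+json">\n'
    + json.dumps(entity, indent=2)
    + "\n</script>"
)
print(markup)
```

The design choice worth noting: the `sameAs` array points outward, to profiles the vendor doesn't fully control, which echoes the article's larger point that entity credibility is corroborated externally rather than declared on-domain.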
What this looks like as a practical strategy
Focus isn't ten publications. It's three. The research shows LLMs cite with depth, not breadth: a brand with significant coverage in three niche expert publications on a specific problem will consistently outperform a brand with thin mentions across twenty generalist sites.
Pick three publications that are niche experts in your problem domain, not just "high-authority" by traditional measures. Get real coverage there, by getting quoted as a source, by placing a contributed piece that actually teaches something, by generating original data someone can cite. Do it systematically over six to twelve months.
That is the execution path. It is slower than a backlink campaign. It produces something backlink campaigns never could: a presence in AI-generated answers that your buyers encounter when they're actively researching the problem you solve.
This is what Machine Relations describes as the new layer of PR for the AI era. The mechanism is earned media in trusted publications, the same mechanism that made PR valuable when the audience was human. The reader changed. The publications AI systems trust have been building credibility for years. The pathway in is through editorial relationships, not technical optimization.
The brands that are already appearing in ChatGPT answers in your category got there the same way. They earned placement in the specific publications your buyers trust, and those publications happen to be the ones AI systems trust too.
Related Reading
- AI Visibility for SaaS Companies: How to Get Cited by ChatGPT and Perplexity
- Machine Relations for Cybersecurity Companies: How Security Startups Build AI Engine Authority
If you want to see exactly where your brand shows up in AI answers right now, and which gaps you're actually sitting in, the visibility audit maps your current citation footprint against your category.