Your Blog Competes for 5% of AI Citations. Here's What Earns the Other 95%.
New research from Fullintel-UConn and MuckRack shows 95%+ of AI citations go to earned, unpaid coverage — not your owned content. Here's the 3-step audit operators are using to close the gap.
There's a specific moment marketing teams hit when they start tracking AI visibility: they pull up ChatGPT or Perplexity, ask a question their best prospect would ask, and see their competitor cited — not them.
The instinct is to produce more content. Better content. More optimized content.
It's the wrong instinct.
New research shows that the content type most B2B brands invest in — blog posts, landing pages, whitepapers on their own domain — is competing for roughly 5% of what AI engines actually cite. The other 95% goes somewhere else entirely.
Here's what the data says, and the three-step audit to fix it.
The Citation Split Nobody Told You About
A joint study from Fullintel and the University of Connecticut, presented at the International Public Relations Research Conference in March 2026, examined thousands of AI search outputs to identify what sources get cited. The results:
- 47% of all AI citations came from journalistic sources — third-party news outlets and sites using journalistic standards
- 48% from a combination of corporate, university, health network, and professional association websites
- Combined: roughly 95% of citations went to sources with editorial or institutional credibility — not paid or promotional channels
At the same time, MuckRack's "What Is AI Reading?" report — which analyzed over 1 million links cited by models including ChatGPT, Gemini, and Claude — found that more than 95% of AI citations come from non-paid, earned coverage. Recency compounds this: more than half of the content cited by OpenAI's models was published within the previous 12 months.
Here's what this means in plain terms: if your AI visibility strategy is built around owned content — your blog, your resource hub, your landing pages — you are competing for a fraction of the citations these engines actually deliver.
The brands showing up in AI answers are getting placed in outlets AI trusts. Not just publishing in-house.
Why This Gap Exists (And Why It's Getting Wider)
AI models have a trust hierarchy. Editors, peer review, institutional authority — these are the signals that determine whether a source gets pulled into a generated response.
The Institute for Public Relations has documented that AI training heavily weights sources with the same criteria humans use to evaluate credibility: transparency, attribution, editorial rigor, and evidence-based reporting. A blog post on your company domain, even a well-researched one, competes poorly against a piece in a recognized outlet that follows those standards.
What makes this gap worse: most brands know AI visibility is a problem, but very few are diagnosing it correctly. A Gartner survey released February 23rd found that 65% of CMOs expect AI to dramatically disrupt their role within two years — yet only 32% believe significant changes to their approach are needed. That's the blind spot: leaders sense the problem but aren't taking the right corrective action.
Producing more owned content is not the corrective action.
Meanwhile, publishers are catching on. Outlets like the Financial Times are actively building offerings that help brands gain placement in the publications AI engines trust. The brands slow to act on this aren't just losing visibility — they're handing the advantage to competitors willing to pay for strategic placement in the right outlets.
The 3-Step Coverage Audit
This takes a couple of hours the first time. Run it once, update quarterly.
Step 1: Baseline Your Current Citation Profile
Pick your top five buyer queries — the questions your best prospect asks before evaluating vendors in your category. Run each one in ChatGPT, Perplexity, and Claude (with web search enabled where available). For every cited source in the responses, note:
- Domain type: Is it journalism? Corporate? Academic? Association?
- Outlet tier: Recognized publication vs. unknown blog
- Your presence: Are you cited, mentioned, or absent?
- Your competitors: Where are they showing up?
Do this for all five queries and build a simple table: query, AI engine, your citation status, competitor citation status, dominant outlet type in the response. This is your baseline.
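If it helps to keep the baseline consistent between quarterly runs, the table can live in a plain CSV. A minimal sketch — the record fields and file name are illustrative assumptions, not from the study or any particular tool:

```python
import csv
from dataclasses import dataclass, asdict

# Hypothetical record shape for the Step 1 baseline.
# One row per (query, engine) pair: 5 queries x 3 engines = 15 rows.
@dataclass
class CitationRecord:
    query: str                  # the buyer question you ran
    engine: str                 # "ChatGPT", "Perplexity", or "Claude"
    cited: bool                 # were you cited in the response?
    competitor_cited: bool      # was a competitor cited?
    dominant_outlet_type: str   # "journalism", "corporate", "academic", ...

records = [
    CitationRecord("best crm for mid-market saas", "Perplexity",
                   cited=False, competitor_cited=True,
                   dominant_outlet_type="journalism"),
    # ... remaining rows from your audit
]

# Write the baseline so next quarter's run can be diffed against it.
with open("ai_citation_baseline.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```

A spreadsheet works just as well; the point is that the columns stay fixed so quarter-over-quarter comparisons are apples to apples.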
Most teams skip this step and go straight to producing content. That's why they stay invisible.
Step 2: Map the Outlet Types AI Trusts in Your Category
The outlets that appear in AI citations are not random. They follow a pattern specific to your industry. Once you've run Step 1, you'll see it: there are typically 5–8 publications that appear consistently across responses in your category.
These are your target outlets. They're the publications AI engines have decided are credible sources for questions in your space. Placements there move the needle; placements in outlets that don't appear in these results almost certainly won't.
Cross-reference this list with your existing coverage. If you've been placing content in outlets that never show up in AI responses for your category queries, you now know why it hasn't improved your AI visibility.
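The cross-reference itself is just a set comparison. A quick sketch — the outlet names are placeholders, not recommendations:

```python
# Outlets that kept appearing in your Step 1 citation audit.
target_outlets = {
    "TechCrunch", "Financial Times", "VentureBeat", "Industry Journal X",
}

# Outlets where you already have placements.
existing_coverage = {
    "VentureBeat", "Syndicated PR Wire", "Tier-3 Blog",
}

gaps = target_outlets - existing_coverage    # trusted outlets you're absent from
wasted = existing_coverage - target_outlets  # coverage AI engines never cite
working = target_outlets & existing_coverage # placements already pulling weight

print("Pursue:", sorted(gaps))
print("Deprioritize:", sorted(wasted))
print("Double down:", sorted(working))
```

The "wasted" bucket is usually the uncomfortable one — it's where budget has been going that never shows up in an AI answer.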
This isn't about chasing any press mention. It's about matching your placement strategy to the outlets AI models have already decided to trust. That's what Machine Relations as a discipline is built around — understanding what signals AI engines respond to and systematically engineering your brand presence around them.
Step 3: Build a 90-Day Earned Placement Calendar
Now you have the map. The next step is systematic execution.
Take your list of 5–8 target outlets and build a 90-day calendar with these inputs:
Coverage angles that produce citations: Not product announcements. Data-led pieces, named expert perspective, and story angles with clear editorial value. AI models specifically favor recent content — the MuckRack data shows strong recency bias, so a placement from two weeks ago is worth more than one from 18 months ago.
Frequency targets: Aim for at least one new placement per target outlet per quarter, minimum. If you're starting from zero, the first 90 days is about establishing presence. The compounding happens after.
Measurement triggers: After each placement, run your baseline query set again. Track whether the new placement shows up in citations and how long it takes. This is your feedback loop. If a placement in a specific outlet produces a citation, double down on that outlet.
The fastest teams learn the outlet-to-citation conversion rate for their category within two cycles. Then it's a volume and consistency game.
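The feedback loop above can be sketched as a diff between two audit runs. The data shapes and domain names here are illustrative assumptions:

```python
# Query -> set of domains cited, captured in the Step 1 baseline.
baseline = {
    "best crm for mid-market saas": {"competitor.com", "techcrunch.com"},
}

# The same queries, re-run after a new earned placement went live.
after_placement = {
    "best crm for mid-market saas": {"competitor.com", "techcrunch.com",
                                     "yourbrand.com"},
}

def new_citations(before, after):
    """Domains cited in the new run but absent from the baseline, per query."""
    return {q: after.get(q, set()) - cited for q, cited in before.items()}

gained = new_citations(baseline, after_placement)
for query, domains in gained.items():
    if domains:
        print(f"{query}: new citations {sorted(domains)}")
```

If a domain tied to a specific placement shows up in `gained`, that outlet converted — that's the signal to double down.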
The Failure Mode to Avoid
The most common mistake operators make at this stage: confusing media mentions with AI-credible citations.
A logo mention on a syndicated press release platform won't move your AI citation profile. An interview in a tier-3 blog — even one with decent traffic — probably won't either. The standard isn't "were you mentioned?" It's "were you mentioned in a publication that AI engines have already decided to trust?"
The filter is strict. Which is actually good news: the audit tells you exactly where to focus. If you know where your AI search gaps are, you can stop spreading budget thin and concentrate placements in the outlets that actually move the number.
Why This Week
The Fullintel-UConn study is being presented at the International Public Relations Research Conference in March. Once that data is in front of PR and marketing leadership at scale, the awareness problem flips: everyone will be running for the same outlet placements. Earned media has always been competitive. AI visibility awareness is about to make it significantly more so.
Most of your competitors aren't running this audit yet. That window is measured in weeks, not months.
If you want to know exactly where your brand stands in AI responses before you spend another dollar on owned content, run the visibility audit here. It benchmarks your citation profile across the AI engines your buyers are actually using.
Related Reading
- AI Visibility for SaaS Companies: How to Get Cited by ChatGPT and Perplexity
- How to Get Forbes Coverage for Your SaaS Company in 2026
Sources: Fullintel-UConn AI Citations Study (Feb 2026) · MuckRack: What Is AI Reading? · Gartner CMO AI Blind Spot Survey (Feb 23, 2026) · Institute for Public Relations · Digiday: Publishers Explore AI Visibility Consulting (Feb 19, 2026)