Your Earned Media Is In the Right Places. AI Still Isn't Citing It. Here's the Diagnostic.
A March 2026 paper introduces the first taxonomy of citation failure modes — why earned media placements fail to get cited even after landing in the right publications. Targeted fixes to 5% of content produce 40% more citations. Here's the diagnostic sequence.
The brand is in TechCrunch. The announcement ran in Forbes. The comparison roundup was published in a trade pub with 90-plus domain authority. ChatGPT still doesn't mention you when a buyer asks about your category.
This is the problem that appears after you fix the off-site layer. The distribution is right. The publications are right. The model still skips the coverage.
A paper published to arXiv on March 10 explains the gap with more precision than anything published on this topic to date.
What current GEO optimization actually measures
Most GEO frameworks optimize for contribution — how much a document influenced an AI response. That's not the same as citation, which is what actually routes a buyer back to your brand. A document can shape an AI answer in meaningful ways while never appearing as a named source. You get no credit for the influence, and your buyers don't know you existed in the answer.
The paper identifies a second problem with existing methods: they apply generic rewriting rules uniformly across content. Answer-first H2s. FAQ schema. Introductory paragraphs structured as direct responses. These interventions work for high-authority, well-distributed content. The researchers found they actively harm long-tail content: newer placements, smaller publications, less-established brands. Generic optimization applied to the wrong content degrades citation rates rather than improving them.
The citation failure taxonomy
The paper introduces the first formal taxonomy of citation failure modes spanning the full citation pipeline. The framework asks a different question than existing GEO methods: not "how do I optimize this content" but "why is this specific document failing to get cited."
That distinction matters because the failure mode determines the fix. A document that fails at the retrieval stage needs a different intervention than one that fails at the extraction stage. A generic rewrite changes 25% of the content without diagnosing which stage is blocked. The diagnostic approach identifies the failure point, applies a targeted repair, and modifies 5% of the content for a 40% relative improvement in citation rates.
That's the number that changes the math on how you allocate post-placement work: 40% more citations from changing one-twentieth of the content, while the brute-force method changes five times as much for less improvement.
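If you want the allocation math spelled out, the figures above reduce to a simple per-unit comparison. The brute-force method's exact lift isn't reported here, so it's only bounded; the variable names are mine.

```python
# Figures cited above, reduced to lift per unit of content changed.
diagnostic_changed = 0.05        # diagnostic approach modifies 5% of the content
diagnostic_lift = 0.40           # and yields a 40% relative citation improvement

brute_force_changed = 0.25       # a generic rewrite touches about 25% of the content
brute_force_lift_max = 0.40      # its lift is reported as smaller, so 40% is an upper bound

print(diagnostic_lift / diagnostic_changed)        # 8.0 points of lift per point of content changed
print(brute_force_lift_max / brute_force_changed)  # at most 1.6 points per point changed
```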
What operators should run this week
Pull your ten highest-value earned media placements from the last six months. These are the pieces that should be generating AI citations — the category articles, the comparison features, the announcement coverage in publications with real domain authority.
For each placement, run three to five queries in Perplexity and ChatGPT where the piece should surface as a source. Comparative queries. Category queries. Queries that test specific claims the article makes. Log which placements get cited and which don't.
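A flat record per placement-query pair is enough for the log. A minimal sketch follows; the field names and the example row are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """One row of the audit: a single query run against a single placement."""
    placement_url: str   # the earned media piece being tested
    publication: str     # where it ran
    engine: str          # "perplexity" or "chatgpt"
    query: str           # comparative, category, or claim-specific
    cited: bool          # did the engine name this placement as a source?
    notes: str = ""      # what the answer cited instead, competing sources, etc.

# Hypothetical example row; ten placements at three to five queries each is 30-50 rows.
log = [
    CitationCheck(
        placement_url="https://example-tradepub.com/category-roundup",
        publication="Example Trade Pub",
        engine="perplexity",
        query="best platforms for <your category>",
        cited=False,
        notes="Answer cited two competitors; this piece never appeared.",
    ),
]
uncited = [row for row in log if not row.cited]
```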
For the ones that don't surface, the failure is somewhere in the citation pipeline — retrieval, extraction, selection, or attribution. Each stage has measurable signals.
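Tagging each uncited placement with a suspected stage keeps the review structured. Here is a minimal sketch of that tagging, with the caveat that the signals and fixes listed per stage are illustrative assumptions, not the paper's own list.

```python
# Stage names follow the pipeline above; the checks and fixes per stage are
# illustrative assumptions, not the paper's prescriptions.
PIPELINE_STAGES = {
    "retrieval": {
        "check": "Does the engine find the page at all when you search the claim verbatim?",
        "fix": "Confirm crawler access and indexing for the engines that matter to you.",
    },
    "extraction": {
        "check": "Is each key claim stated in one clear sentence under a section heading?",
        "fix": "Add a lead sentence per claim; break up long, nested paragraphs.",
    },
    "selection": {
        "check": "Do competing sources carry stronger freshness or structured-data signals?",
        "fix": "Shore up the weaker signals, or brief fresher coverage that carries them.",
    },
    "attribution": {
        "check": "Do the brand name and the claim sit together in quotable form?",
        "fix": "Get the brand and the claim into one extractable sentence.",
    },
}

def next_step(suspected_stage: str) -> str:
    """Return the illustrative fix for a suspected failure stage."""
    entry = PIPELINE_STAGES.get(suspected_stage)
    return entry["fix"] if entry else "No diagnosis yet: re-run the query checks first."
```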
GEO-16 research, which analyzed 1,702 citations across Brave Summary, Google AI Overviews, and Perplexity, found that metadata freshness, semantic HTML, and structured data were the signals most strongly associated with citation across all three engines. Pages meeting a minimum quality threshold across those signals achieved a 78% cross-engine citation rate. The failure modes are not obscure. Most fall into patterns detectable with a structured review.
For placements that aren't getting cited: check whether the key claims are written in a form the model can extract and attribute. Check whether the publication still lets AI crawlers reach the page; robots.txt rules and bot-blocking policies change. Check whether the article has structural problems that block extraction even when the content is accurate and well-placed: nested formatting without clear section headings, claims buried in long paragraphs without a lead sentence.
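Parts of that review can be scripted. Below is a minimal sketch, assuming requests and beautifulsoup4 are installed, that checks crawler access for a few common AI user agents plus the heading, structured-data, and freshness signals named above. It supplements the manual read of each placement rather than replacing it.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Common AI crawler user agents; publications add and remove blocks over time.
AI_CRAWLERS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def audit_placement(url: str) -> dict:
    """Quick structural checks on one placement; a supplement to the manual read."""
    findings = {}

    # Crawler access: does the publication's robots.txt let AI crawlers fetch this page?
    parts = urlparse(url)
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    findings["crawler_access"] = {bot: robots.can_fetch(bot, url) for bot in AI_CRAWLERS}

    # Fetch and parse the article itself.
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")

    # Semantic structure: section headings a model can anchor extraction to.
    findings["section_headings"] = len(soup.find_all(["h2", "h3"]))

    # Structured data: JSON-LD blocks (Article, FAQPage, and similar types).
    findings["json_ld_blocks"] = len(soup.find_all("script", type="application/ld+json"))

    # Freshness metadata: one common signal is the article:modified_time meta tag.
    modified = soup.find("meta", attrs={"property": "article:modified_time"})
    findings["modified_time"] = modified.get("content") if modified else None

    return findings
```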
This is a citation pipeline audit, not an SEO audit. The question is whether each placement is reaching the model in a form it can use as a source.
Where this fits in the Machine Relations stack
Getting placements into the right publications closes the distribution gap. Diagnosing why specific placements aren't generating citations closes the execution gap. Both audits are necessary. Neither substitutes for the other.
Machine Relations is the discipline that holds them together. The placement gets your brand into the publications AI engines cite. The citation audit tells you whether those placements are completing the circuit. Without the second step, the distribution investment is underperforming in ways that are hard to see from coverage metrics alone.
For operators: the citation failure audit on existing placements takes half a day. It tells you which pieces have diagnosable, fixable problems — and which ones are already working. That's the right starting point before you brief new coverage.
The visibility audit at AuthorityTech shows which of your placements are generating AI citations and which aren't, so you have the data for that audit without manual spot-checking across engines.