Schema Didn't Move AI Citations. Google Killed FAQ Rich Results. Here's What Actually Gets Cited in 2026.
Ahrefs tracked 1,885 pages adding schema markup. Zero citation uplift across Google AI Overviews, AI Mode, and ChatGPT. The same week, Google deprecated FAQ rich results. The technical optimization playbook for AI visibility was always wrong.
Two of the most-prescribed AI visibility tactics failed publicly in the same week. Ahrefs tracked 1,885 pages that added JSON-LD schema markup between August 2025 and March 2026, matched them against 4,000 control pages, and measured citation changes across Google AI Overviews, AI Mode, and ChatGPT. The result: zero meaningful citation uplift on any platform. Four days earlier, Google deprecated FAQ rich results entirely, effective May 7, 2026. The "add code, get cited" playbook is dead. What AI engines actually cite is earned authority, and that has always been the real lever.
What Ahrefs actually found: 1,885 pages, no uplift
The numbers are clear. Across a matched difference-in-differences analysis:
| AI Platform | Citation Change | Verdict |
|---|---|---|
| Google AI Overviews | -4.6% | Small decline, statistically significant |
| Google AI Mode | +2.4% | Indistinguishable from zero |
| ChatGPT | +2.2% | Indistinguishable from zero |
Ahrefs ran four separate statistical tests. All four told the same story. Adding schema markup did not increase AI citations on any platform. The AI Overviews decline was real but small — roughly 12 fewer daily citations per page in a sample where most pages received hundreds.
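The difference-in-differences method behind these numbers is simple to state: take the change in the treated group and subtract the change in the matched control group, so that any platform-wide trend is netted out. A minimal sketch (the values below are invented for illustration and are not the study's data):

```python
def did_estimate(treat_before, treat_after, ctrl_before, ctrl_after):
    """Difference-in-differences: the change in the treated group
    minus the change in the control group over the same window."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical mean daily citations per page, chosen only to
# illustrate the shape of the calculation:
effect = did_estimate(
    treat_before=260.0,  # pages before adding schema markup
    treat_after=248.0,   # the same pages after adding it
    ctrl_before=255.0,   # matched control pages, same window
    ctrl_after=255.0,
)
print(effect)  # -12.0: the control trend is subtracted out
```

The point of the design is that a raw before/after comparison on the treated pages alone would confound the schema change with whatever was happening to AI citations generally during those months; the control subtraction removes that.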
The industry has been pointing to a correlation as proof: AI-cited pages are nearly three times more likely to carry JSON-LD than non-cited pages, based on Ahrefs' analysis of 6 million URLs. But correlation is not causation. Those sites are better maintained, more authoritative, and produce stronger content. Schema markup is a byproduct of quality, not a driver of citations.
AI engines ignore schema during retrieval
This is the part most GEO consultants skip. A separate experiment from searchVIU tested whether five major AI systems — ChatGPT, Claude, Perplexity, Gemini, and Google AI Mode — actually read schema markup when fetching pages in real time. Every system extracted only visible HTML content. JSON-LD, hidden Microdata, and hidden RDFa were all ignored.
The markup was invisible to the systems people were optimizing for. That is not a marginal finding. That is a structural one. If AI retrieval systems cannot see the data, adding it cannot change their citation behavior.
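The mechanics are easy to see. JSON-LD lives inside a `<script>` tag, and any fetcher that extracts only human-visible text discards script and style content before it ever looks at the words. A minimal sketch of such an extractor, using only Python's standard library (the sample page and its JSON-LD are invented for illustration):

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects only human-visible text, skipping <script> and
    <style> blocks, roughly what a text-only AI fetcher sees."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are outside every skipped tag.
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

page = """
<html><body>
  <h1>Pricing</h1>
  <p>Plans start at $29/month.</p>
  <script type="application/ld+json">
    {"@type": "FAQPage", "name": "Pricing FAQ"}
  </script>
</body></html>
"""

parser = VisibleTextExtractor()
parser.feed(page)
print(parser.chunks)  # ['Pricing', 'Plans start at $29/month.']
```

The JSON-LD block never reaches the output. A retrieval pipeline built this way can rank, quote, and cite the page without ever knowing the markup existed, which is exactly the behavior the searchVIU experiment observed.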
Then Google killed FAQ rich results
On May 7, 2026, Google announced that FAQ rich results will no longer appear in Google Search. Search Console will stop reporting on FAQ structured data. The FAQ search appearance, the rich result report, and support in the Rich Results Test will all be removed by June 2026. The Search Console API loses FAQ support in August 2026.
FAQ structured data was the single most recommended AEO tactic for years. Entire consulting practices were built around it. Both events in the same week tell the same story. Technical markup as a standalone AI visibility lever is losing signal value, not gaining it.
What AI engines actually cite
If schema does not drive citations and FAQ rich results are gone, what does drive them?
Earned authority. Third-party trust signals. Content that AI systems can verify from independent sources.
A Trustpilot analysis of more than 800,000 AI responses across ChatGPT, Gemini, Perplexity, and Google AI Mode found that brands with no active review profile were cited in only 1% of answers. Brands that actively collected and responded to feedback were cited in 75.3% of answers. Review and trust sites now account for 14% of all AI citations in that sample, second only to general brand websites.
Research from the GEO-16 framework applied to B2B SaaS found that cross-engine citations — pages cited by multiple AI platforms — exhibit 71% higher quality scores than single-engine citations. The pages earning citations across ChatGPT, Perplexity, and Google simultaneously are not the ones with better schema. They are the ones with stronger earned authority, clearer entity attribution, and content structured for extraction.
A measurement framework for generative engine optimization distinguishes between citation selection (getting linked) and citation absorption (having your content integrated into the AI answer itself). Schema markup affects neither mechanism. What drives both is whether the source has been independently verified, frequently referenced, and structured so the AI engine can extract a clean, attributable claim.
The operator shift
The entire "optimize your way to AI visibility" approach was built on a misread of the evidence. Pages with schema get cited more — not because of the schema, but because those pages belong to authoritative sites doing everything else right.
The real playbook is simpler and harder:
- Get covered in publications AI engines trust. Earned media placements in high-authority outlets create the third-party corroboration signal AI systems use to select sources.
- Build entity clarity. AI engines need to resolve who you are, what you do, and why your claims are credible. That is not a code problem. That is a citation architecture problem.
- Make your content extractable. Answer-first structure, specific claims with named sources, comparison tables, direct answers to the questions buyers actually ask. This is the Machine Relations stack in practice: authority feeds entity clarity, entity clarity feeds AI visibility, visibility feeds citation, citation compounds.
I have been saying this since I coined Machine Relations: you cannot optimize your way into an AI engine's trust. You have to earn it. This week, the data caught up.
If you want to know where your brand actually stands in AI engines right now, run a visibility audit. The gap between what you think AI says about you and what it actually says is usually the first thing that needs fixing.
FAQ
Does schema markup help with AI citations? No. Ahrefs tracked 1,885 pages adding JSON-LD schema and found no meaningful citation uplift across Google AI Overviews, AI Mode, or ChatGPT. AI retrieval systems extract only visible HTML content during real-time page fetching, meaning hidden structured data is not read at the point of citation selection.
Why did Google deprecate FAQ rich results? Google announced on May 7, 2026, that FAQ rich results will no longer appear in Google Search. The feature is being removed from Search Console and the Rich Results Test by June 2026, with Search Console API support ending in August 2026. Google has been steadily reducing the role of structured data as a direct visibility lever.
What actually drives AI citations in 2026? Earned authority from third-party sources, content quality and extractability, entity clarity, and cross-platform trust signals. Trustpilot's analysis of 800,000+ AI responses found a 75x citation gap between brands with active trust profiles and those without.
Who coined Machine Relations? Jaxon Parrott, founder of AuthorityTech, coined Machine Relations in 2024 to describe the discipline of earning AI citations and recommendations through earned media authority rather than technical optimization alone.
How is Machine Relations different from GEO? GEO focuses on optimizing content for generative AI engines. Machine Relations is the broader discipline: authority, entity resolution, citation architecture, distribution, and measurement. GEO is one layer within the MR Stack.