
You're Auditing the Wrong Layer of Your AI Visibility Stack

Your team ran the GEO checklist — schema, structured data, answer-first content. Perplexity still cites your competitors. The data explains why, and what to fix instead.

Christian Lehman

Your team ran the GEO checklist. Answer-first H2s. FAQ schema. Article markup. You checked crawler access for GPTBot and PerplexityBot. You restructured the top five pages. You did exactly what every AI search optimization guide tells you to do.
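That last check is worth automating so it doesn't silently regress. A minimal sketch using Python's standard-library robots.txt parser, with example.com standing in for your own domain; GPTBot and PerplexityBot are the user-agent tokens each vendor documents for its crawler.

    from urllib.robotparser import RobotFileParser

    # Placeholder domain; swap in your own site.
    SITE = "https://www.example.com"

    rp = RobotFileParser(SITE + "/robots.txt")
    rp.read()  # fetch and parse the live robots.txt

    for bot in ("GPTBot", "PerplexityBot"):
        status = "allowed" if rp.can_fetch(bot, SITE + "/") else "blocked"
        print(bot + ":", status)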

Perplexity still cites your competitors. ChatGPT still doesn't mention you.

This is not a technical failure. You optimized the right way for the wrong layer.

The finding most GEO guides skip

A September 2025 paper posted to arXiv introduced GEO-16, a 16-pillar auditing framework, and used it to analyze the relationship between page-quality signals and AI citation behavior across Brave Summary, Google AI Overviews, and Perplexity. The researchers found that on-page signals — structured data, semantic HTML, recency metadata — do matter for citation likelihood. But they included a qualifier that most practitioners ignore.

Even high-quality, well-structured pages are unlikely to be cited if they reside solely on vendor blogs. The study's conclusion: generative engines heavily weight earned media, meaning third-party authoritative domains, and largely exclude brand-owned and social content from AI answers. Social platforms are almost entirely absent.

This is consistent with what researchers from UVA's Darden School and the University of Houston, writing for MIT Sloan, found when analyzing AI-driven search behavior in early 2026. They interviewed a CEO who called it "a wake-up call" after testing AI search responses and finding a smaller local competitor appearing ahead of the CEO's brand despite significantly more SEO investment and domain authority. The competitor's edge was not technical. It was editorial: the competitor had coverage in publications the AI trusted.

The implication for operators: on-site optimization work is table stakes at best. The layer that actually moves AI citations is off your website entirely.

What AI engines actually pull from

Harvard Business Review published new analysis this week on how LLMs are reshaping online search. The central observation: AI-driven discovery reduces friction for consumers while increasing it for businesses — and brands that built visibility on traditional SEO assumptions are finding that both the buyer's path and the reader of their content have changed.

Search Engine Land's analysis of generative AI summary behavior found that when an AI summary appears in a search result, users click traditional blue links only about 8% of the time. Ranking is mostly irrelevant at that point. What matters is whether your brand appears in the summary — and summaries are built from external sources, not your own pages.

A January 2026 analysis published in the Wall Street Journal made the connection explicit: "Tier 1 outlets still matter for two reasons. There's still plenty of people out there reading them, and what's on the first page of Google and what appears in the LLMs." The publications that shaped human brand perception for years are the same publications AI systems treat as authoritative sources. Your brand's presence in those outlets is citation infrastructure, not awareness spend.

The audit your team actually needs

Before you update another H-tag or add another schema block, run this check.

Test your highest-value queries in Perplexity. Pick 5–10 buying-intent queries — the questions a prospect types when evaluating vendors in your space. Run each one. Screenshot the citations. Note which brands get named, which sources Perplexity is pulling from, and whether any of those sources mention your brand. If competitors appear and you don't, on-site optimization won't close that gap. The gap is in citation presence — the publications Perplexity trusts for your category.
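For teams that want to rerun this check weekly instead of by hand, here is a minimal sketch. It assumes access to Perplexity's Sonar API, an OpenAI-style chat-completions endpoint; the endpoint, model name, and citations field reflect Perplexity's published API documentation and should be verified against the current version, and the queries are hypothetical placeholders.

    import os
    import requests

    # Hypothetical buying-intent queries; replace with your own.
    QUERIES = [
        "best contract analytics platforms for mid-market legal teams",
        "top AI-powered sales forecasting vendors",
    ]

    API_URL = "https://api.perplexity.ai/chat/completions"
    HEADERS = {"Authorization": "Bearer " + os.environ["PERPLEXITY_API_KEY"]}

    for query in QUERIES:
        resp = requests.post(API_URL, headers=HEADERS, json={
            "model": "sonar",
            "messages": [{"role": "user", "content": query}],
        })
        resp.raise_for_status()
        data = resp.json()
        print(query)
        # Which sources Perplexity retrieved for this answer.
        for url in data.get("citations", []):
            print("  cited:", url)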

Identify the publications AI is pulling from. From the citations, a pattern will emerge quickly. Perplexity and ChatGPT consistently favor specific source types: comparison articles, recognized trade publications, independent research reports, and industry roundups. The arXiv study described this as a "dual strategy" requirement: on-page quality signals and strategic positioning on authoritative external domains. Without the second part, the first part doesn't produce consistent AI citation at scale.
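Tallying the cited domains makes that pattern concrete. A short sketch continuing from the citation URLs collected above; the URLs shown are illustrative, not real data.

    from collections import Counter
    from urllib.parse import urlparse

    # Citation URLs collected from the Perplexity runs (illustrative only).
    cited_urls = [
        "https://www.g2.com/categories/contract-analytics",
        "https://www.techradar.com/best/sales-forecasting-software",
        "https://www.g2.com/compare/vendor-a-vs-vendor-b",
    ]

    # Normalize to bare domains so www and non-www variants collapse together.
    def domain(url):
        return urlparse(url).netloc.removeprefix("www.")

    counts = Counter(domain(u) for u in cited_urls)

    # The publications Perplexity favors for your category, most-cited first.
    for dom, n in counts.most_common():
        print(n, dom)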

Map your editorial gap. Cross-reference your coverage over the last 12 months against the publications surfacing in AI answers for your category. Treat this as a distribution audit. The question is not whether you got press. It's whether you appeared in the specific publications AI engines are reading for the queries your buyers type. A focused operator can complete this in under two hours without specialized tools.
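The cross-reference itself reduces to a set difference. A sketch under the same assumptions, with placeholder domains standing in for your real coverage list:

    # Domains surfacing in AI answers for your category (from the tally above).
    ai_trusted = {"g2.com", "techradar.com", "searchengineland.com"}

    # Domains that ran coverage of your brand in the last 12 months (placeholders).
    your_coverage = {"prnewswire.com", "blog.example.com"}

    # The editorial gap: publications AI reads where you have not appeared.
    gap = sorted(ai_trusted - your_coverage)
    print("Missing from:", ", ".join(gap))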

If the gap is real (and for most B2B brands right now, it is), schema work will not close it. For a companion view of the on-page signals that do and don't correlate with citation behavior, the 3-Signal Citation Audit lays out the specifics.

The failure mode to avoid

Teams that discover they're missing from AI answers usually respond with two moves: add more schema markup and publish more content. Both are reasonable for on-site signals and neither is wrong. But neither addresses the off-site gap, which is the dominant citation driver.

The more specific mistake is treating earned media as a brand-building activity with its own budget, its own team, and its own timeline — separate from AI visibility work. That organizational separation made sense in the old model. In the current model, earned media in trusted publications is the direct input to AI citation. The difference between how ChatGPT and Perplexity drive pipeline comes down to citation mechanics, but both require the same underlying input: third-party editorial coverage in authoritative publications. Perplexity's inline citation model converts at 11x the rate of traditional organic search specifically because each citation sends a qualified buyer who found your brand through the third-party sources Perplexity retrieved, not through your homepage.

Separating the earned media work from the AI visibility work is the organizational equivalent of running paid search on a domain that hasn't been indexed.

Why this is an infrastructure problem

The mechanism behind AI citation is not new. Earned media in trusted publications has always been the most durable trust signal in brand communications. What changed is the reader.

Where the reader was once a human scanning TechCrunch or Forbes to understand who the credible players in a category were, it's now also a language model making citation decisions when a prospect asks "who are the leading companies in X." The publications haven't changed. The indexing logic hasn't changed. Only the reader has.

This is the core of Machine Relations — the discipline of ensuring your brand is cited by AI systems because of the same mechanism that made PR valuable in the first place: earned media placements in publications that AI engines already treat as authoritative sources. PR got that mechanism exactly right. Machine Relations is what happens when you keep the mechanism and rebuild the model around it.

For operators: if you're not in the publications AI is reading for your category, you're not in AI's answers. Schema markup does not change that.

Start by understanding where your brand actually shows up across AI engines right now. The visibility audit at AuthorityTech maps your citation presence so you're working from data rather than assumptions.