Afternoon Brief | GEO / AEO

Publishing More Content Is Making Your Brand Harder to Find in AI Search. Here's the Fix.

The content volume model is structurally broken for AI visibility. Here's the 4-step audit that tells you exactly what to cut, what to keep, and where to redirect the budget to actually show up in AI answers.

Christian Lehman

The production math looks right on the surface: more content, more chances to rank. More posts, more pages, more coverage of every keyword in your category. For a decade this math held. Traffic compounded. Dashboards moved in the right direction.

It doesn't hold anymore.

AI systems now answer informational queries directly. A user asks ChatGPT what the best way to approach a content strategy is — and instead of getting a list of 10 links to click through, they get an answer. One answer. With citations from publications the model decided to trust. Your blog post, your comprehensive guide, your 2,000-word breakdown of the same topic — none of that is in the answer unless you earned a placement in a publication the AI already trusts.

A Search Engine Land analysis published this week put it plainly: informational SEO is over as a strategy. The channel that content volume was built to serve — organic search traffic — is structurally contracting. The channel that builds AI citation authority is different. And most content teams are still funding the wrong one.

This is the audit that tells you what you have, what AI would actually cite, and where to put the budget next quarter.


Why volume stopped working

Gartner's research team predicted in early 2024 that traditional search engine volume would drop 25% by 2026 as users moved to AI-powered answer engines. We're now in 2026, and the data is tracking toward that prediction. Organic CTR on queries featuring Google AI Overviews has already dropped 61% since mid-2024. Traffic that once flowed through informational content is being absorbed at the query level.

The structural issue is straightforward: the cost of creating content has dropped to near zero. Every company in your category can produce 40 posts a month now. When production is infinite and attention is fixed, being found becomes an economic problem, not a technical one. More content doesn't increase your odds of being found. It increases the noise your content has to cut through.

AI search doesn't use the same ranking model that made SEO volume work. LLMs evaluate brand mentions based on context, source authority, and co-occurrence between your brand and the topics you want to own. A brand mentioned in Forbes' fintech coverage carries different weight than the same brand mentioned in a sponsored post on a second-tier business blog. Volume is not the input. Source quality and editorial context are the inputs.

This is why your content calendar, as currently designed, may be producing nothing that compounds.


Step 1: Separate infrastructure content from citation content

Pull your last 12 months of published content. For each piece, ask one question: Is this content a buyer would check after they've already chosen a direction — or content a journalist, an AI model, or an industry publication would cite as a primary reference?

Infrastructure content is conversion support. Comparison pages, pricing breakdowns, product walkthroughs, case studies. This content serves buyers deep in a decision process. It should exist and be technically clean, but it doesn't build citation authority.

Citation content is referenceable. Original data. Canonical definitions. Proprietary frameworks. Research findings that journalists and analysts would quote when covering your category.

Most teams find the ratio is heavily weighted toward infrastructure — and that most of the infrastructure was built when organic traffic was the primary acquisition model. The ratio isn't the problem. The problem is that citation content hasn't been treated as a production category at all.
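The split above can be run as a simple tally once each piece has been hand-labeled with the one-question test. A minimal sketch, assuming a hypothetical content inventory with manually assigned `type` values (the titles and column names here are illustrative, not from any real export):

```python
from collections import Counter

def audit_ratio(rows):
    """Return the share of each content type in the inventory."""
    counts = Counter(row["type"] for row in rows)
    total = sum(counts.values())
    return {t: round(n / total, 2) for t, n in counts.items()}

# Hypothetical 12-month inventory after the manual classification pass
inventory = [
    {"title": "Pricing breakdown",      "type": "infrastructure"},
    {"title": "Product walkthrough",    "type": "infrastructure"},
    {"title": "Annual customer survey", "type": "citation"},
    {"title": "Competitor comparison",  "type": "infrastructure"},
]

print(audit_ratio(inventory))
```

The point of the tally is not precision; it is making the infrastructure-to-citation ratio visible so the budget conversation in Step 4 starts from a number rather than an impression.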


Step 2: Find what you already have that AI would cite

You probably have more citation-worthy material than you've published. Search Engine Land's analysis of nearly two million ChatGPT-referred sessions found that "answer capsules" — clean, scannable explanations of a specific topic in the first third of a page — had the strongest correlation with ChatGPT citations. Pages with original or branded data followed closely.

Start with what your company already knows:

  • Customer survey data that's been sitting in an internal deck
  • A methodology your team uses that the rest of your category doesn't have a name for
  • A finding from your product data that quantifies something buyers care about

The test for whether something qualifies: would a journalist covering your category quote this in an article? If yes, it's citation material. If no, it's probably infrastructure or filler.

The common failure at this stage is treating "comprehensive" content as citation content. A 5,000-word guide that covers everything your category covers is not a referenceable asset — it's a summary. Original data, canonical definitions, and specific named frameworks are referenceable assets. The comprehensive guide is the content that tries to rank for everything and gets cited for nothing.


Step 3: Find where AI is citing your category

Before you know where to place content, you need to know where your category is being referenced. Run 10 queries your ideal buyer would actually ask ChatGPT or Perplexity. Note every publication that appears in the citations.

These are your targets — not the publications you've historically pitched. AT's analysis across 11 industries and 1M+ citations found that Forbes is the only outlet cited by AI engines across all major B2B and B2C sectors. But for most B2B companies, vertical publications — outlets that have covered your specific category for years — drive disproportionate citation authority for category-specific queries. A placement in a niche trade publication that AI engines associate with your topic beats a brand mention in a general business outlet.

Map the publications you find by two dimensions: how often they appear in citations for your category's core queries, and whether your brand currently has any coverage there. The gap between those two lists is your targeting list.
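The gap analysis above is mechanical once the citations are logged. A minimal sketch, assuming hypothetical query results and coverage data (the outlet names are placeholders, not recommendations):

```python
from collections import Counter

# Hypothetical log: publications cited in the answer to each of your
# 10 category queries, plus outlets where the brand already has coverage.
citations_per_query = [
    ["forbes.com", "paymentsdive.com"],
    ["paymentsdive.com", "techcrunch.com"],
    ["forbes.com", "paymentsdive.com", "bankingdive.com"],
]
current_coverage = {"forbes.com"}

# Dimension 1: how often each outlet appears in category citations
freq = Counter(pub for cites in citations_per_query for pub in cites)

# Dimension 2: filter to outlets with no existing coverage.
# The gap, ranked by citation frequency, is the targeting list.
targets = [pub for pub, _ in freq.most_common() if pub not in current_coverage]
print(targets)
```

Rerunning the same queries quarterly and diffing the targeting list also shows whether placements are actually closing the gap.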


Step 4: Redirect effort toward brand mention acquisition

The Search Engine Land brand mentions playbook is specific about what earns mentions at scale: original data that fills a gap in the information ecosystem, canonical definition pages, and active participation in communities where LLMs scrape discussions for training signal.

The outreach approach that works: lead with the asset, not the ask. "We published new data on X in our category — thought it might support your coverage of Y" is dramatically more effective than a cold pitch about your company's expertise. Pitch journalists who have covered that topic in the last 30 days. Not the ones who might cover it someday.

The reallocation question is concrete. Every hour currently spent producing informational content that answers questions AI already answers is an hour that could go toward one of four things:

  1. Publishing a proprietary data point from your customer or product base
  2. Building a canonical definition page for a term your category uses but hasn't defined clearly
  3. Pitching a named finding to a publication your target AI engine cites for category queries
  4. Documenting a methodology in a format that journalists and analysts can quote

One well-placed, citation-worthy asset in a trusted publication compounds differently than 20 posts answering questions a search AI already handles.


The frame that explains why this works

The mechanism underneath all of this is the same one that made PR valuable before AI existed: earned coverage in publications that third parties trust is the most durable signal of brand credibility.

Machine Relations is what happens when you recognize that the reader changed. The publications didn't change — Forbes, TechCrunch, Reuters, the vertical outlets that cover your category — these are the same sources that shaped buyer perception for decades. They're also the sources AI engines index, trust, and pull from when constructing answers. A brand that earned editorial coverage in those outlets before the AI search era arrived with a citation footprint already built. A brand that didn't is now competing to build one while AI answers are already live.

Earned media has always been the mechanism. The audit above is how you find where your current program is investing in the wrong version of it.

Your highest-leverage next step: run your top 10 category queries in ChatGPT and Perplexity, map every publication that appears in the citations, and check your coverage rate against that list. The gap you find is your targeting list. That's the whole brief.