Afternoon Brief | AI Search & Discovery

The Market Leader Ran an AI Search Test. A Brand No One Had Heard of Came Up First.

The category leader with the biggest SEO budget wasn't in the AI answer. A local competitor was. MIT Sloan and McKinsey explain why market position doesn't protect you — and the three-step audit to check your gap.

Christian Lehman

Key Takeaways

  • Kate Klein, EVP of marketing at Houston Fitness Partners (a major Planet Fitness franchisee): "We were shocked when a small, local company in Houston was landing better in AI searches" — MIT Sloan Management Review, Jan 2026
  • A financial services exec watched a consumer search for their category on ChatGPT. The category leader — with the largest market share and biggest digital marketing budget in the industry — wasn't cited. A smaller competitor was.
  • McKinsey: a brand's own website accounts for just 5–10% of the sources AI search draws from. Traditional SEO investment optimizes for the fraction AI mostly ignores.

Kate Klein runs marketing for one of the largest Planet Fitness franchisee groups in the US. Last year, her team ran a simple test: they pulled up ChatGPT and Perplexity and searched for their fitness category.

"That was a wake-up call," she told MIT Sloan Management Review. "We were shocked when a small, local company in Houston was landing better in AI searches."

Her company had one of the larger search investments in the category. The AI didn't reflect that.

She's not alone.

The same MIT Sloan research documented a financial services executive watching a consumer search for category information live. The consumer opened ChatGPT — not Google. Asked a standard category question. The executive's firm, the one with the largest market share and the most money spent on traditional media, digital marketing, and SEO in the industry, was not in the response. A much smaller competitor was.

This is not a glitch. It's how AI search actually works.

By the numbers

  • 37% of domains cited by AI engines don't appear in any traditional search results — Zhang et al., arXiv:2512.09483
  • 88% of Google AI Mode citations don't appear in the SERP top 10 — Moz, 2026 (40,000 queries)
  • 85%+ of non-paid AI citations originate from earned media, not brand-owned content — Muck Rack
  • 65% of ChatGPT's top-cited pages come from DR80+ domains — Ahrefs (analysis of 1,000 citations)
  • 60% of Google searches already end without a click — SparkToro, 2024
  • $750 billion in US consumer spend projected to flow through AI-powered search by 2028 — McKinsey
  • 44% of AI search users say it's already their primary research source, ahead of traditional search (31%), brand websites (9%), and review sites (6%) — McKinsey

Why market position doesn't transfer

Traditional SEO rewards investment in owned content and links. Build the biggest, most-optimized site and authority compounds over time. That model is well understood, expensive to replicate, and a significant reason why category leaders stay category leaders.

AI search has a different input set.

McKinsey's AI discovery research found that a brand's own website accounts for roughly 5–10% of the sources AI search draws from when answering a query. The rest comes from third-party editorial coverage, trade press, forums, review platforms, institutional sources, and user-generated content.

The brands that invested the most in the traditional model built presence in the 5–10% slice. The smaller companies beating them in AI search built presence in the other 90–95%.

The math gets harder from here. Brands without AI-appropriate presence are looking at 20–50% declines in traffic from traditional search channels. Two-thirds of Gen Zers and more than half of Millennials now use LLMs to research products before buying, according to Harvard Business Review citing Pernod Ricard's 2026 analysis. If your brand doesn't appear in those answers, you're not in consideration.

Category leaders with the most to lose are often the hardest organizations to move. They built their position on the old model. Shifting budget away from what worked feels like a risk. But staying put is the riskier move.

The three-step audit

Before changing anything, run this. Most teams skip straight to content production. That's why they stay invisible.

Step 1: Run your top five buyer queries in ChatGPT, Perplexity, and Google AI Mode.

Not product queries. Research questions your buyer asks before they've made a category decision. Something like: "What's the best [category] solution for [use case]?" or "How do [company type] teams typically handle [specific problem]?"

For each response, record: does your brand appear? Does a specific competitor appear? What types of sources — news outlets, trade press, forums, academic — is the AI drawing from?

Step 2: Map the source types, not just whether you showed up.

Look at the 5–10 domains appearing most often across responses. What are they? Industry trade publications? Analyst reports? Specific news outlets? Review platforms?

Those are the publications AI has decided are credible for your category. The question isn't whether you should publish more content. It's whether you appear in those specific outlets at all.

Cross-reference your existing coverage. If your earned media work over the last 12 months never touched the outlets appearing in your category's AI responses, that's the gap — and now you can see exactly where it is.
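Tallying the dominant domains is simple enough to script. A minimal sketch, assuming you've logged each response as a (query, engine, cited domains) tuple — the queries and domains below are illustrative placeholders, not real audit data:

```python
from collections import Counter

# Hypothetical audit log: one entry per AI response, with the domains it cited.
# Replace with your own recorded queries and citations.
responses = [
    ("best gym for beginners", "ChatGPT",
     ["healthline.com", "reddit.com", "garagegymreviews.com"]),
    ("best gym for beginners", "Perplexity",
     ["healthline.com", "nerdwallet.com"]),
    ("how do franchise teams handle retention", "Google AI Mode",
     ["reddit.com", "healthline.com"]),
]

# Tally how often each domain is cited across all responses.
domain_counts = Counter(
    domain for _, _, domains in responses for domain in domains
)

# The 5-10 most frequent domains are the outlets AI treats as
# credible for your category -- your earned-media target list.
for domain, count in domain_counts.most_common(10):
    print(f"{domain}: {count}")
```

The `most_common` output is the target list Step 2 describes: cross-reference it against your last 12 months of coverage.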

Step 3: Score the competitive position.

For each query: does your brand appear? Does your top competitor appear?

  • Both appear → parity
  • They appear, you don't → active gap
  • Neither appears → first-mover window

Build a table: query, AI engine, your citation status, competitor status, dominant source type. Fifteen rows total. That's your baseline.

Most marketing teams don't have this. Which is why most marketing teams are guessing about what to fix.
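The scoring in Step 3 can be sketched the same way. This is a minimal illustration with made-up rows; the fourth bucket ("lead" — you appear, the competitor doesn't) is my addition, not named in the audit above, but worth tracking:

```python
# Classify each (query, engine) cell into the audit's buckets.
def score(you_cited: bool, competitor_cited: bool) -> str:
    if you_cited and competitor_cited:
        return "parity"
    if competitor_cited:
        return "active gap"
    if you_cited:
        return "lead"  # assumed fourth bucket, not in the original three
    return "first-mover window"

# Hypothetical baseline rows:
# (query, engine, you cited?, competitor cited?, dominant source type)
rows = [
    ("best [category] for [use case]", "ChatGPT", False, True, "trade press"),
    ("best [category] for [use case]", "Perplexity", False, False, "forums"),
    ("how do teams handle [problem]", "Google AI Mode", True, True, "review platforms"),
]

# Print the baseline table: 5 queries x 3 engines = 15 rows in a full audit.
for query, engine, you, comp, source in rows:
    print(f"{query} | {engine} | {score(you, comp)} | {source}")
```

One row per query per engine, and the baseline table falls out of a spreadsheet or a script like this in an afternoon.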

What to do with the results

The wrong response to this audit is to produce more owned content.

The right response is a placement calendar targeting the 3–5 outlets showing up most frequently in your category's AI citations. Earned coverage in those outlets — bylined pieces, data contributions, expert commentary that gets cited — is what builds the source infrastructure your AI visibility runs on.

The feedback loop moves faster than most teams expect. Research from Princeton and Georgia Tech found that adding verified statistics cited from credible external sources improves AI visibility by 30–40% — Aggarwal et al., SIGKDD 2024. A placement in an AI-credible outlet typically shows citation impact within 4–6 weeks. Run the audit first, then re-run your baseline query set after each new placement.

McKinsey found that even in major consumer categories — credit cards, hotels, electronics — top brands were absent from AI search responses despite their market position. The brands showing up weren't the biggest. They were the ones with presence distributed across the third-party sources AI engines trust.

That's the core work of Machine Relations: not producing content for its own sake, but building your brand's presence across the specific source types AI has already decided to trust. Market position is not a machine signal. Distributed editorial presence is.

Run the audit. The gap is probably larger than you expect, and smaller than it will be in six months.

Sources: MIT Sloan Management Review, "Can Customers Find Your Brand? Marketing Strategies for AI-Driven Search" (Jan 28, 2026) · McKinsey, "New Front Door to the Internet: Winning in the Age of AI Search" (Oct 2025) · Zhang et al., arXiv:2512.09483 · Moz AI Mode Citation Study, 2026 · Muck Rack AI Citation Analysis · Ahrefs ChatGPT Citation Study · SparkToro Zero-Click Study, 2024 · Aggarwal et al., SIGKDD 2024
