
The Brands Spending the Most on Search Are Invisible in AI — Here's the 3-Question Audit

A Planet Fitness franchisee EVP and a financial services exec both found out the same way: AI doesn't care about your search budget. MIT Sloan's Information Search Marketing framework gives operators a 3-question diagnostic to see where they actually stand.

Christian Lehman

Two executives found out the same week that their search investment had become a liability.

Kate Klein, EVP of marketing at Houston Fitness Partners, a major Planet Fitness franchisee, ran a test: she pulled up ChatGPT and searched for her category. "That was a wake-up call," she told MIT Sloan Management Review. "We were shocked when a small, local company in Houston was landing better in AI searches." A financial services executive had the same experience watching a consumer in real time: the customer pulled up ChatGPT instead of Google, searched for top-rated options, and the exec's firm, with larger market share, a bigger marketing budget, and more SEO spend than anyone else in the space, wasn't in the answer.

Both companies had done everything right by the old model. That's the problem.

MIT Sloan's January 2026 research on what they're calling the Information Search Marketing (ISM) framework puts a name on what these executives were observing: AI platforms have upended the correlation between marketing investment and discovery. The brands that show up are not always the brands that spent the most. They're the brands that AI engines can find, parse, and confidently reference.

The gap between those two things is where most B2B marketing budgets are silently disappearing.

Why your search budget doesn't transfer to AI visibility

SEO works through signals AI engines mostly ignore. Backlinks, keyword density, page authority, historical click patterns — these were the inputs that determined which 10 blue links a human might click. AI engines have a different job. They're synthesizing an answer, not returning a list. The inputs that matter are different.

According to Ahrefs' citation analysis of 1,000 ChatGPT responses, 65.3% of cited pages come from domains with a Domain Rating (DR) of 80 or higher, which signals earned authority from trusted third-party publications, not brand-owned content. Moz's 2026 AI Mode study across 40,000 queries found that 88% of AI Mode citations were not in the organic top 10; the overlap between "ranks on Google" and "gets cited by AI" is just 12%.

You can be on page one of Google and functionally invisible in AI search. The Planet Fitness franchisee was. The financial services firm was. Both had been competing on an old scoreboard while AI was keeping score somewhere else entirely.

This isn't a content quality problem. Both companies almost certainly had more content, more resources, and better SEO than the smaller competitors that beat them. It's an infrastructure problem: they hadn't built the type of visibility that AI engines use.

The 3-question ISM diagnostic

The Information Search Marketing framework breaks the problem into three questions operators can run right now. They don't require a tool subscription — just an honest hour with a laptop and the right prompts.

Question 1: Does AI know who you are in your category?

Open ChatGPT, Perplexity, and Google Gemini. Ask them: "What are the top [your category] companies?" and "Who are the most credible [your category] vendors?" Your goal isn't to rank #1. Your goal is to appear in the answer at all. If you're not mentioned, you have an entity clarity gap — AI engines can't confidently connect your brand to your category. No amount of SEO fixes that. What fixes it: third-party coverage in publications these engines treat as authoritative, specifically structured around your category claim.

Question 2: What do AI engines say about you when asked directly?

Search your brand name across at least two AI platforms. Ask: "What does [your company] do?" and "Why would someone choose [your company] over alternatives?" The answers are based on what AI engines have extracted from third-party sources that mention your brand. If the description is vague, wrong, or missing key differentiators, that's a citation architecture problem — the pieces covering your brand aren't structured in a way that lets AI engines extract and attribute clean claims. The fix isn't a press release. It's earned coverage in publications AI already cites, structured so the key claims are independently extractable.

Question 3: Where is your category being decided before buyers contact you?

According to Forrester's State of Business Buying 2024, 70% of B2B buyers complete most of their research before first contact with a vendor. That research is increasingly happening in AI platforms, not Google. The Princeton/Georgia Tech GEO study (Aggarwal et al., SIGKDD 2024) found that adding statistics and citing credible sources improves AI citation rates by 30–40%. If your category is being researched in ChatGPT and Perplexity, and your brand isn't appearing in those answers, you're not in the consideration set before the conversation starts.

Run all three questions. The answers tell you which of three gaps you're sitting in: entity clarity (AI doesn't know you), citation architecture (AI can't accurately describe you), or distribution (AI isn't pulling your category content at all).
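If you run the audit across several brands or repeat it monthly, the three yes/no answers map mechanically onto the three gap types. The sketch below is an illustrative assumption about how to record that mapping, not part of the MIT Sloan framework itself; the function name and labels are hypothetical.

```python
def diagnose_gaps(mentioned_in_category: bool,
                  described_accurately: bool,
                  appears_in_research_answers: bool) -> list:
    """Map the three ISM audit answers onto the three gap types.

    Each argument is the result of one audit question:
      Q1 - does AI name you in category queries?
      Q2 - does AI describe you accurately when asked directly?
      Q3 - do you appear in pre-contact research answers?
    """
    gaps = []
    if not mentioned_in_category:
        gaps.append("entity clarity")        # AI doesn't know you
    if not described_accurately:
        gaps.append("citation architecture") # AI can't accurately describe you
    if not appears_in_research_answers:
        gaps.append("distribution")          # AI isn't pulling your category content
    return gaps or ["none"]
```

For example, a brand that AI can describe accurately but never surfaces in category or research queries would come back as `diagnose_gaps(False, True, False)`, flagging both an entity clarity gap and a distribution gap.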

The inversion most operators miss

The natural response to "we're invisible in AI search" is to treat it as an SEO problem — fix the website, add schema markup, optimize headings. That addresses citation architecture, which matters. But it doesn't address why a smaller competitor is showing up above a larger one with more budget.

The smaller company showing up in Houston wasn't beating Planet Fitness because of better schema markup. It was beating them because it had third-party coverage — earned media — from local and regional publications that AI engines could verify and cite. AI systems preferentially pull from sources they already recognize as credible, and brand-owned content doesn't clear that bar.

Muck Rack's analysis of 1 million+ AI prompts found that 85.5% of AI citations come from earned media sources. Not SEO content. Not brand websites. Coverage that humans at media outlets chose to write, in publications that AI engines have indexed as authoritative. The small Houston company didn't out-optimize Planet Fitness. It out-earned them on the coverage side.

This is why the ISM framework's diagnosis lands: the companies failing the AI visibility test aren't failing because they have bad content. They're failing because they built the wrong type of visibility. SEO visibility and AI citation visibility are different products. The second one requires earned authority — third-party coverage in publications AI already trusts — not just better on-page optimization.

Where Machine Relations fits

The mechanism behind all three ISM questions is what Machine Relations makes explicit: earned media placements in trusted publications are how AI engines decide what to surface, cite, and recommend. The publications haven't changed — Forbes, TechCrunch, Harvard Business Review, and the sector-specific outlets that have been authoritative in your space for years are still the inputs. What changed is the reader. AI engines read the same publications that shaped human brand perception for decades, and they're using that editorial authority to decide whose brand gets recommended when someone asks a question.

The ISM framework identifies the problem. Machine Relations identifies the mechanism. Earned authority in AI-trusted publications is the infrastructure that makes AI engines able to find you, describe you accurately, and include you in the answer.

If the 3-question audit above surfaced gaps, the AT Visibility Audit shows exactly where your brand stands across AI engines right now — and what type of coverage would close the gap.

Sources: MIT Sloan Management Review, "Can Customers Find Your Brand?" (Jan 2026) · Ahrefs ChatGPT citation analysis · Moz AI Mode study, 2026 · Forrester State of Business Buying 2024 · Princeton/Georgia Tech GEO paper, SIGKDD 2024 · Muck Rack Generative Pulse