Afternoon Brief | AI Search & Discovery

AI Search Has Two Lanes Now. Most Operators Are Only Working the One They Can See.

Harvard Business Review identified two distinct ways AI is disrupting search — and they require separate playbooks. Most marketing teams are running one strategy against both.

Christian Lehman

Most marketing teams have one AI search strategy. You've spotted the trend — buyers querying ChatGPT instead of Google, organic traffic declining — and you've responded with a plan: better content structure, more schema markup, maybe some GEO tactics from a recent agency brief.

The problem isn't the plan. It's that you're solving one problem when there are two.

On March 6, Harvard Business Review published a piece by researchers Graham Kenny and Ganna Pogrebna on how LLMs are overtaking search. Their core finding: AI is reshaping online search in two distinct but overlapping ways. Both reduce friction for consumers. Both increase friction for businesses. But they operate through different mechanisms — and they respond to different fixes.

If your strategy doesn't address both, you're in good company. MIT Sloan documented what that gap looks like in January: a major U.S. fitness brand ran test queries through AI platforms and got beaten by a small local gym in Houston. "We were shocked when a small, local company in Houston was landing better in AI searches," said Kate Klein, executive vice president of marketing for Houston Fitness Partners, a major Planet Fitness franchisee. A financial services executive watched a prospect pull up ChatGPT instead of Google to find options in their category — and the executive's own company, the market share leader, wasn't in the answer. A smaller competitor was. Market leadership doesn't transfer to AI search. Neither does SEO budget. The reason is structural.


The two lanes

Lane 1: AI summaries within search platforms. Google AI Overviews, Google AI Mode, Bing Copilot. A user searches on one of these platforms and an AI-generated summary appears above the organic results. Even if your page ranks in the top five, the click often doesn't happen — the summary already answered the question.

This lane still runs through search. Rankings still matter. But ranking alone doesn't get you into the summary. The AI parser needs to find your page, read it, and decide your content belongs in the answer. That decision is driven by structure and content clarity, not domain authority alone.

Lane 2: Direct AI platform queries. ChatGPT. Perplexity. Gemini in standalone mode. A significant and growing portion of buyers — especially in B2B — are skipping search entirely and querying these platforms directly. Adobe tracked a 4,700% increase in AI-driven traffic to U.S. retail sites in July 2025 alone. The behavior is already in the data.

In Lane 2, your Google ranking is irrelevant. These models produce answers based on what they already know — built during training on web content and continuously updated through the sources they index. The question isn't whether you rank. It's whether you appeared in the sources those models trained on, and whether you keep showing up in the publications they pull from during live retrieval.


The playbook for Lane 1

Lane 1 is where content structure earns its keep. Ranking is still the entry requirement, but structure determines whether you get included in the summary.

State the answer to the query in the first paragraph of your page. AI Overviews pull content from the opening third of ranked pages at disproportionately high rates — get your answer up front, not buried after three paragraphs of context. If the query is a question, mirror that question explicitly in your content with a clean answer below it. Schema markup matters here: FAQ schema, HowTo schema, and Article schema signal to the AI parser what type of content it's reading and where the answer lives.
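As a concrete illustration, FAQ schema is expressed as a JSON-LD block (schema.org's FAQPage type) placed in a `<script type="application/ld+json">` tag on the page. The question and answer text below are hypothetical placeholders, not guaranteed inclusion criteria:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How do AI Overviews choose which pages to summarize?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AI Overviews tend to pull from ranked pages that state a direct answer near the top and use structured markup identifying the question being answered."
      }
    }
  ]
}
```

The point of the markup is labeling, not ranking: it tells the parser which string on the page is the question and which is the answer, so the summary generator doesn't have to infer that from layout.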

The tracking step is simple but most teams skip it: run your ten most important category queries in Google and check whether you appear in the AI Overview — not whether you rank, but whether you appear in the summary. Those are different outcomes with different diagnostic paths. You can learn more about what actually gets pages featured in Google AI Overviews and use it as your Lane 1 checklist.


The playbook for Lane 2

Lane 2 doesn't respond to any of the above.

You cannot structure your way into ChatGPT's answer. You cannot optimize your metadata to appear in Perplexity. The way direct AI query engines decide who to cite is fundamentally different from how search ranking works: they weight sources that have appeared in trusted, authoritative publications over time.

Cindy Machles described this accurately in a January 2026 Forbes piece: "Mentions in respected outlets, expert commentary, credible interviews and well-regarded reports aren't just reputation-builders anymore. They're the external validation that AI models use to double-check whether a brand deserves the spotlight."

Deloitte's CMO guidance frames it from the other direction: AI platforms have become powerful influencers of consumers, and brands should be thinking about how they appear across the sources AI engines trust — because that's where the path from brand awareness to AI recommendation actually runs.

The practical implication: to show up in Lane 2, your brand needs to be present in the sources those models were trained on and continue to pull from. Earned placements in Tier 1 publications — WSJ, HBR, Forbes, TechCrunch — and in the credible vertical outlets covering your category are what build that presence. Victoria Usher summarized the shift clearly in Forbes Business Council: AI-generated answers give "disproportionately positive credit to sources that are considered reliable" — which tends to mean respected publications, analyst reports, and widely cited research. Your owned website rarely qualifies.

Track Lane 2 presence by running your five most important buyer-intent prompts in ChatGPT and Perplexity: "best [category] tools for [use case]," "how do companies typically solve [problem] in [industry]." Note where you appear, where competitors appear, and which publications are being cited in the answers. Those cited publications are your target editorial placements — the ones driving AI recommendation in your category right now.
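One lightweight way to make that tracking step repeatable is a small script. This is a sketch only: the brand names, the sample answer text, and the bare-domain citation heuristic are all hypothetical, and in practice the answer strings would come from manual copy-paste or from each platform's own API rather than the hardcoded sample here.

```python
import re

def scan_answer(answer: str, brands: list[str]) -> dict:
    """Check which tracked brands an AI answer mentions, and which
    publication domains it appears to cite (simple bare-domain heuristic)."""
    mentioned = [b for b in brands if b.lower() in answer.lower()]
    domains = sorted(set(re.findall(
        r"(?:https?://)?(?:www\.)?([a-z0-9-]+\.(?:com|org|io|net))",
        answer.lower(),
    )))
    return {"mentioned": mentioned, "cited_domains": domains}

# Hypothetical answer text standing in for a live ChatGPT/Perplexity response.
sample = ("For mid-market teams, AcmeCRM and PipeRunner come up most often; "
          "see the comparison at techcrunch.com and a buyer's guide on forbes.com.")
result = scan_answer(sample, ["AcmeCRM", "PipeRunner", "YourBrand"])
print(result)
```

Run against the same five prompts each month, the output gives you two trend lines at once: whether your brand is entering the answers, and which publications keep surfacing as citations, i.e. your target editorial placements.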


The failure mode

The mistake most teams make is treating Lane 2 like it's a harder version of Lane 1. Publishing more content. Improving technical SEO. Adding more case studies to the company blog. None of it addresses how Lane 2 actually works.

This is why brands with strong content operations are still invisible in ChatGPT. Owned content counts for very little in how direct AI queries are answered. External citation in trusted publications is what counts, and most marketing teams haven't built a systematic approach to generating it.


Why the mechanism matters

Lane 2 runs on Machine Relations — the discipline of ensuring AI engines cite your brand rather than missing it entirely. The mechanism is earned media: a placement in a publication that AI engines treat as authoritative creates a citation that compounds over time. Every time a buyer asks about your category, those placements sit in the retrieval pool. Your owned content often doesn't.

PR got the core of this right decades ago: third-party coverage in credible publications carries more weight than anything a brand says about itself. The mechanism hasn't changed. The reader has — it's now partly AI engines doing the first pass of buyer research before a human ever reaches your site.

The Visibility Audit at app.authoritytech.io/visibility-audit shows which publications in your category are actively driving AI citations right now, and where your brand isn't represented.

Two lanes. Two playbooks. Most teams are running one. That's the gap worth closing first.