HBR Says LLMs Are Overtaking Search. The Playbook They Didn't Include.
HBR confirmed the shift last week. What they didn't include: which earned media formats drive AI citations, why topic clustering beats single placements, and what to measure instead of LLM referral traffic.
HBR published a piece last week titled "LLMs Are Overtaking Search. Here's How to Adjust Your Online Presence." It's already circulating in marketing teams that pay attention. Good — this needed to reach a mainstream audience.
The problem: it confirms the shift. It doesn't tell operators what to do about it.
Here's the tactical layer they left out.
The numbers your team should know
McKinsey research from last fall found that 44% of AI-powered search users now name it their primary source for buying decisions — ahead of traditional search (31%), brand websites (9%), and review sites (6%). By 2028, McKinsey projects $750 billion in US revenue will funnel through AI-powered search.
Gartner's January 2026 survey adds to this: 51% of consumers say their research habits changed because of generative AI, with 71% of that group now using more specific, question-based queries. "Which [product] is best for [use case]" has replaced keyword search for most discovery behavior.
On the B2B side, Forrester's January 2026 research found that 94% of B2B buyers now use AI somewhere in their buying process — and twice as many named AI search as their most meaningful source compared to a year prior. Forrester's read on the implications is direct: "The marketing model that has worked in the past — driving traffic to your site to retarget and nurture prospects — will be much less effective."
Nearly half of all buyers and the vast majority of B2B buyers are forming opinions before they touch your website, your demo, or your sales team. What AI says about your brand in those moments shapes who makes the short list. Most marketing teams have no clear picture of what that looks like.
Zero-click is already happening
The no-click problem is real. But it's not symmetric.
TechCrunch reported in July 2025 that no-click news searches grew from 56% to 69% after Google launched AI Overviews. ChatGPT news-related prompts grew 212% from January 2024 through May 2025. Most of those interactions end without a click — but the brand named in the zero-click response won the consideration before any alternative was evaluated.
Brands absent from those answers aren't losing clicks. They're losing consideration entirely. That's not a traffic problem. It's a pipeline problem.
McKinsey's research found that unprepared brands could see traditional search traffic decline by 20 to 50%. The brands that absorb that decline without losing pipeline are the ones building citation presence in the channels their buyers consult before ever reaching an owned property.
Why format matters more than volume
Not all earned media drives AI citations. The format matters more than the total coverage count.
AI engines pull from comparison content, roundup pieces, and "best of" lists for category and discovery queries — these are the formats structured to answer the questions buyers actually ask. When someone prompts ChatGPT with "what's the best [product] for [use case]," the AI pulls from content that directly answers that structure.
A brand profile in a trade publication doesn't answer that question. A named mention in a "top 10 tools for X" roundup does.
Four moves to close the gap:
- Run the citation audit before you pitch anything. Search your category questions in ChatGPT, Perplexity, and Google AI Overview. Note which brands get named, which sources are cited, and what format those pieces are in. That map tells you which outlets carry citation weight in your specific category — and where you're absent.
- Target the format when you pitch. When you're working toward placement, ask for the format explicitly. A mention in a comparison or roundup piece carries more citation weight than the same word count in a generic trend article. If your PR effort produces profiles and announcements only, you're building coverage AI engines largely ignore for discovery queries.
- Build topic clusters, not one-off campaigns. One placement is a single data point for an AI engine. Three to five placements across different outlets on the same topic cluster start to make you the default answer for that query. The compounding happens when multiple credible sources confirm the same claim about your brand in the same context — that's what citation authority looks like structurally.
- Measure citation share, not LLM referral traffic. LLM referral traffic is easy to track and almost always disappointing — most AI-answer interactions are zero-click. The right metric is how often your brand appears as a named answer for your target queries, and which placements drove that. Run the queries today. Run them again in 60 days after targeted placements. Movement in citation share tells you whether the strategy is working.
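The citation-share metric above is simple enough to track in a spreadsheet, but a small script keeps the 60-day comparison honest. A minimal sketch, assuming you've already collected the brands named in each AI answer by hand (the queries and brand names below are illustrative placeholders, not real audit data):

```python
from collections import Counter

# Hypothetical audit data: for each target query, the brands the AI engine
# named in its answer. Collected manually from ChatGPT, Perplexity, etc.
audit = {
    "best crm for small teams": ["HubSpot", "Pipedrive", "Zoho"],
    "best crm for startups": ["HubSpot", "Salesforce"],
    "top crm tools 2026": ["Salesforce", "HubSpot", "Pipedrive"],
}


def citation_share(audit: dict[str, list[str]], brand: str) -> float:
    """Fraction of target queries whose answer names the brand."""
    if not audit:
        return 0.0
    hits = sum(1 for brands in audit.values() if brand in brands)
    return hits / len(audit)


def named_brand_counts(audit: dict[str, list[str]]) -> Counter:
    """How often each brand appears across all answers — the category map
    from the citation audit step."""
    return Counter(b for brands in audit.values() for b in brands)
```

Run the same queries again after your targeted placements, rebuild the `audit` dict, and compare `citation_share` values; movement in that number, not referral clicks, is the signal.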
The failure mode most teams fall into
Treating AI visibility as a content problem.
Teams publish more content, add schema markup, and write FAQ pages optimized for AI-style questions. That work has marginal value. It doesn't solve the core issue.
AI engines don't recommend brands because those brands have well-structured websites. They recommend brands because authoritative third-party publications have confirmed, in context, that those brands are credible options for specific use cases. No amount of on-site optimization replicates that. The pattern is consistent in the citation audits we've covered here — brands with strong citation footprints got there through editorial presence, not technical fixes.
The fix is an earned media problem. Which makes it a relationship problem — who you know at the right outlets, whether you can get placed in the right formats, and whether you can sustain the velocity to build a real topic cluster rather than a single notable mention.
Why this works at the infrastructure level, not just the campaign level
Earned media has always been the mechanism behind real credibility. A placement in a respected publication, secured through a real editorial relationship, was the strongest trust signal that existed when your buyers were reading those publications directly.
Those same publications are now the sources AI engines index and trust when generating answers. The reader changed. The mechanism didn't.
Machine Relations is the name for that connection — earned media in trusted publications driving AI citation the same way it drove human credibility for decades. The brands showing up in ChatGPT answers for category queries got there because credible publications placed them in context, repeatedly, across the right topic clusters, in the formats that get cited.
HBR confirmed the shift last week. The operators building citation infrastructure this quarter will be the ones AI engines default to in 12 months.
Run your visibility audit to see where you stand today.