AI Already Defined Your Brand. You Weren't Consulted.
HBR's March–April 2026 cover story documents what happens when AI models define brands without input from those brands. Most companies are optimizing for AI visibility. Almost none are building the thing that actually determines what AI says about them.
In 2024, Pernod Ricard's head of digital and design learned that two-thirds of Gen Zers and more than half of Millennials had started using LLMs to research products before buying them. He decided to find out what those models were actually saying about his brands. What he found unsettled him. One popular AI model had miscategorized Ballantine's Scotch whisky, a mass-market product, as a prestige offering. Harvard Business Review documented this in its March–April 2026 issue, framing it as a brand management problem without an obvious fix.
He is not alone. He is just one of the first executives to actually look.
The AI models now doing the first cut of research for millions of buying decisions were never briefed by your marketing team. They built their picture of your brand from what they found in publications they trust — and they did it without asking anyone. For a growing number of companies, the most consequential description of their brand in the last two years wasn't written by a PR agency or a copywriter. It was generated by an AI that read what it could find and drew conclusions from that.
The scale of this shift is moving faster than most brand teams have absorbed.
McKinsey published data in February documenting the transition from analytical AI — "show me the data" — to agentic AI that "does it for me." In procurement, that means AI agents sourcing, evaluating, and shortlisting vendors before a human enters the room. The shortlist is written before anyone picks up the phone.
On the consumer side, HBR's reporting on LLM research behavior among Gen Z and Millennials points to the same pattern: AI is where brand consideration now starts for a meaningful share of buyers. Not search, not social. A prompt to a language model. And LLMs, unlike search engines, don't return ten results for users to evaluate themselves. They give an answer. Your brand either shows up in that answer or it doesn't.
The buyers who looked up your category this morning and asked ChatGPT or Perplexity who the credible options were — those buyers got a response. It was not based on your ad spend. It was not based on your website. It was based on what the model had read from sources it treats as authoritative. You were either in that answer or you weren't, and the decision was already made.
The stakes of that selection are not abstract.
A 13-month analysis of LLM referral traffic published by Search Engine Land last week found that AI-referred visitors convert at approximately 18% — the highest conversion rate of any channel, including paid shopping, SEO, and PPC. Separate analysis from Visibility Labs across 94 e-commerce brands showed ChatGPT visits grew 1,079% from January to December 2025, converting 31% higher than non-branded organic search. On Airbnb's Q4 2025 earnings call, CEO Brian Chesky stated directly that chatbot traffic converts at a higher rate than traffic from Google.
The traffic that comes from AI answers is among the most valuable traffic on the internet. High intent, pre-qualified by the model before the click.
Most brands are getting almost none of it.
The conventional response to this problem has been technical. Structured data. Answer-engine-optimized FAQs. Schema markup. Content formatted for AI digestion. All of it aimed at making brand-owned content easier for AI to read.
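To make the conventional approach concrete, here is an illustrative sketch of the kind of "schema markup" brands embed in their pages: a schema.org Organization block serialized as JSON-LD so crawlers and AI systems can parse brand facts directly. The brand name, URLs, and description below are hypothetical placeholders, not taken from the article.

```python
import json

def organization_jsonld(name, url, description, same_as):
    """Build a schema.org Organization object as a JSON-LD string.

    This is the structured-data layer the article describes: machine-readable
    brand facts embedded in a page, typically inside a <script
    type="application/ld+json"> tag.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "description": description,
        "sameAs": same_as,  # links to authoritative third-party profiles
    }
    return json.dumps(data, indent=2)

# Hypothetical example values for illustration only.
snippet = organization_jsonld(
    name="Example Brand",
    url="https://example.com",
    description="Mass-market consumer goods company.",
    same_as=["https://en.wikipedia.org/wiki/Example_Brand"],
)
print(snippet)
```

The article's argument is that this layer only improves readability of brand-owned content; it does not change which sources the models treat as authoritative.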
The problem with this approach is that readability is not what determines what AI says about your brand. Authority does. And for AI engines, authority comes from the same signal that has underpinned editorial credibility for decades: what trusted third-party publications have written about you.
Our analysis across 200+ client programs finds that 82–89% of AI-generated answers cite third-party earned media over brand-owned content — publications like Forbes, Reuters, the Financial Times, and TechCrunch. This is consistent with broader citation data across the publications AI engines weight most heavily. The AI engines are reading the same sources that shaped brand perception with human readers for years. A brand that has built consistent editorial presence in those outlets will appear in AI answers about its category. A brand that has not will be absent from them.
The AI model miscategorized Ballantine's because it had insufficient high-authority editorial information to represent the brand accurately. The fix for that is not a Schema tag. It's a coverage strategy — specifically, getting the publications that AI engines treat as authoritative to cover your brand with enough frequency and accuracy that the model has something real to learn from.
This is the thing most brands are not building. They're optimizing the output layer. The input layer — what goes into the publications that determine AI citations — is being ignored.
This is the mechanism Machine Relations identifies behind AI-era brand authority. PR got the core idea right: earned media placements in respected publications are the most powerful trust signal that exists. That was true when your buyers were human. It's true now that AI systems are doing the first cut of research on their behalf.
What changed is the reader. The publications haven't changed. The AI engines index Forbes and Reuters and the Financial Times because those sources have been authoritative for decades. The brands that appear accurately — and favorably — in AI answers are the ones whose earned media footprint gave the models something accurate to learn from. Brands that relied on owned content, or skipped earned media altogether, are being described by AI systems that didn't have much to work with.
The companies that will own their AI description in 2026 are not the ones running GEO audits on their homepages. They're the ones that built the editorial record before the machines started reading it — and the ones building it now, while most of the market is still optimizing the wrong layer.
Related Reading
- AI Visibility for EdTech Companies: The 2026 Earned Media Playbook
- Fintech PR Strategy 2026: Building Earned Authority Without Compliance Risk
If you want to understand where your brand currently stands in AI answers — which models cite you, which publications they draw from, and where competitors are appearing instead of you — the visibility audit maps your current AI footprint. That's the starting point.
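The core tally behind an audit like this can be sketched in a few lines: given the source URLs cited across a set of AI-generated answers, count how often each publication domain appears and what share of citations is third-party earned media versus brand-owned content. The URLs and the owned-domain set below are invented for illustration; a real audit would collect citations from live model responses.

```python
from collections import Counter
from urllib.parse import urlparse

def citation_share(cited_urls, owned_domains):
    """Return (per-domain citation counts, share of citations that are
    third-party earned media rather than brand-owned pages)."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in cited_urls]
    counts = Counter(domains)
    earned = sum(n for d, n in counts.items() if d not in owned_domains)
    total = sum(counts.values())
    return counts, earned / total if total else 0.0

# Invented example: citations pulled from AI answers about a category.
urls = [
    "https://www.reuters.com/markets/article-1",
    "https://www.ft.com/content/article-2",
    "https://techcrunch.com/2026/01/article-3",
    "https://www.examplebrand.com/blog/post",
]
counts, earned_share = citation_share(urls, owned_domains={"examplebrand.com"})
print(counts)
print(f"earned-media share: {earned_share:.0%}")  # 3 of 4 citations -> 75%
```

Even this toy version surfaces the question the article raises: if the publications the models cite most never mention your brand, no amount of on-site optimization puts you in the answer.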