Morning Brief | AI Search & Discovery

AI Agents Don't Trust Your Website. They Trust the Publications That Covered You.

HBR just told boardrooms their brands aren't ready for agentic AI. The fix most executives are about to reach for is the wrong one. Here's what actually determines what AI says about your brand.

Jaxon Parrott

There's a Harvard Business Review piece landing in boardrooms this week. The headline: LLMs and AI agents are reshaping how buyers research and buy, and most companies aren't ready.

The story HBR leads with is Pernod Ricard. Gokcen Karaca, the company's head of digital and design, discovered that a major AI model had miscategorized Ballantine's Scotch — an affordable mass-market product — as a prestige offering. Wrong category. Wrong positioning. Wrong everything. He found this out after Jellyfish's consumer research documented that two-thirds of Gen Zers had already started using LLMs to research products, with more than half of Millennials doing the same.

Every executive who reads this piece will land on the same response: we need to audit what AI says about us, and then fix it.

That's the right impulse with the wrong solution attached.


The instinct is understandable. You discover a problem with your digital representation, so you fix your digital presence. It's the same reflex that produced ten years of website rewrites, SEO audits, and content overhauls. When the gap shows up, reach for what you can control.

The problem is that AI brand representation doesn't work the way your website does.

A February 2026 arXiv study analyzing how LLMs decide what to cite found that models favor certain sources at rates up to 27.4% higher than human researchers — not because those sources are objectively better, but because of what appeared in their training data. The models were trained on a specific universe of text. They learned which sources were credible from that universe. That preference is baked in. You cannot change it by updating your FAQ page.

Research accepted at AAAI 2025 reinforces why this matters: trust in AI-generated responses is directly correlated with what sources are cited. The publications that trained the model's sense of credibility are the same publications that make the AI's answer feel authoritative to the human reading it. Your website is not in that universe.

What this means practically: when an AI agent is asked to research a brand or compile a vendor shortlist, it doesn't crawl your website and form a fresh opinion. It draws on the coverage and editorial record that exists in publications it already treats as authoritative. What Forbes said about you. What TechCrunch reported. What your industry's go-to publications ran when your product launched. That's the input. Your website copy is not the input.


This distinction matters a lot more now than it did eighteen months ago.

McKinsey's January 2026 analysis of agentic commerce projected that AI agents could mediate between $3 trillion and $5 trillion in commercial transactions under moderate scenarios. That's not a projection about some distant future — McKinsey calls 2026 the year AI agents "stopped being an experiment and became part of how people shop." The B2B version is moving right behind it. Corporate buying, which involves multiple stakeholders and formal vendor evaluation, is increasingly folding AI agents into early-stage research. The agent doesn't close the deal. It builds the shortlist.

The agentic layer doesn't just change how consumers find you. It changes how procurement teams evaluate vendors and how partners assess credibility before a call. The AI agent doing that research pulls from the same editorial record as everything else.

If your brand doesn't have a strong editorial presence in the publications these systems trust, the agent's representation of you will be thin, absent, or — like Pernod Ricard's Scotch — wrong.

Being miscategorized as cheap instead of prestige is actually the best-case failure. At least you're there. More often the failure mode is simpler: you're not mentioned at all. The agent compiles a category comparison, lists the brands it has enough coverage on to characterize, and your company is off the page.

You weren't there. The shortlist was made without you.


The brands that come out ahead won't be the ones that update their content most aggressively. They'll be the ones that figure out, early, where their editorial record is thin and start building it in publications that matter.

That's a different exercise than content optimization. It requires actual placement in actual publications — the kind that AI engines index, trust, and pull from. Not a brand blog. Not a LinkedIn post. A placement in an editorial outlet with a real journalist, a real editorial relationship, and a byline that carries weight. The kind that requires relationships, not just a content calendar.

We covered how the AI purchase funnel closed when Meta, ChatGPT, and Gemini all positioned themselves between brands and buyers in the same week. And when enterprise AI visibility tools launched, we made the same point: the dashboards show you the gap. They cannot close it. Closing it requires editorial presence in the publications that shaped the AI's understanding of your category in the first place.

The audit tools that will flood the market after HBR's piece can tell you where you're invisible. They can confirm that Ballantine's is being miscategorized. What they can't do is fix it.


That's what Machine Relations names. The earned media mechanism that built brand credibility with human readers — placements in respected publications, through real editorial relationships — is the exact mechanism AI systems use to decide what to say about your brand. The reader changed. The mechanism didn't.

PR built authority with human researchers. Machine Relations builds it with machine researchers. The pathway is identical: earn a placement in a publication the reader trusts, and the reader forms an opinion based on what that publication said. Whether the reader is a journalist, a VP of Procurement, or an AI agent doing the first cut of a vendor review doesn't change what that placement does.

The brands that understand this in March 2026 will have a head start that compounds. Editorial records take time to build. AI systems pull from accumulated coverage, not last week's update.

If HBR just told your board that agentic AI is reshaping how you get found, the right question isn't "what do we need to update?" It's "what is our editorial record in the publications AI trusts, and where is it missing?"

Start with the free visibility audit and find out what AI is saying about your brand right now.