Morning Brief | AI Search & Discovery

L'Oréal and Unilever Just Solved Their AI Visibility Problem. They Solved the Wrong One.

Five of the world's largest consumer brands adopted a new protocol to control how AI agents represent them. It handles product data. It doesn't handle the thing that determines whether AI recommends you.

Jaxon Parrott

Five of the world's largest consumer brands — L'Oréal, Unilever, Mars, Beiersdorf, and Reckitt — just adopted something called the Agentic Merchant Protocol. Azoma, the company behind it, is hosting what it calls "the world's first Agentic Commerce Optimization event" in London tomorrow to formalize the rollout.

The logic is clear. AI agents — ChatGPT Shopping, Amazon Rufus, Google Gemini, Perplexity — are increasingly the first point of product discovery for consumers. If your product data is structured wrong, or your pages can't be crawled, or your attributes are thin, AI agents either represent you inaccurately or skip you entirely. The protocol gives brands a single system to define and distribute product intelligence across those agent ecosystems.

Ruroc, a $50M D2C helmet brand, used this type of optimization to become "consistently the #1 most recommended Ski & Snowboarding Helmet brand by ChatGPT" — with 14x the ChatGPT traffic they had before, according to their founder. Perfect Ted, a matcha brand, attributed part of a +532% year-over-year revenue increase to the same approach.

Real numbers. Real strategy. And incomplete in exactly the way that will matter most as the AI buying layer matures.


What the protocol does

AMP is fundamentally a structured data play. It helps brands close what Azoma calls "GEO blockers" — schema errors, crawlability gaps, JavaScript-only product pages that AI agents can't parse. It helps brands define canonical product intelligence: the authoritative source of truth for what a product is, what it costs, how it compares. Then it distributes that intelligence across the agent ecosystem.
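The AMP format itself isn't publicly documented, but the "GEO blockers" described here map onto standard schema.org Product markup: AI crawlers parse JSON-LD far more reliably than JavaScript-rendered pages. A minimal sketch of what "canonical product intelligence" looks like in that standard form (the product name, SKU, and URL below are hypothetical placeholders, not any brand's real data):

```python
import json

def product_jsonld(name, description, sku, price, currency, url):
    """Build a minimal schema.org Product record as JSON-LD.

    Illustrative only: this shows the kind of machine-readable product
    data a GEO audit checks for. It is not the AMP wire format, which
    Azoma has not published.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "sku": sku,
        "url": url,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }

# The fields whose absence typically counts as a "GEO blocker".
REQUIRED = ("name", "description", "offers")

def missing_fields(record):
    """Return the required schema fields a product record lacks."""
    return [f for f in REQUIRED if not record.get(f)]

# Hypothetical product, for illustration only.
record = product_jsonld(
    "Atlas 2.0 Ski Helmet",
    "Carbon-shell ski and snowboard helmet",
    "ATL-2-BLK",
    295.00,
    "USD",
    "https://example.com/atlas-2",
)
print(json.dumps(record, indent=2))
print(missing_fields(record))  # [] — no schema gaps
```

Embedding a block like this in a `<script type="application/ld+json">` tag makes the product legible to crawlers even when the visible page is rendered client-side, which is the crawlability gap the protocol is addressing.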

This is a real problem worth solving. If AI agents can't read your product data correctly, they'll either misrepresent you or skip you for a competitor they can read. That's an obvious loss.

But structured product data and editorial trust are two different things — and AI systems treat them that way.


The layer that actually drives recommendations

When an AI agent evaluates two competing products and issues a recommendation, the decision isn't purely about which product data is cleaner. It's about which brand the AI has learned to view as credible.

Research published on arXiv last week — Diagnosing and Repairing Citation Failures in Generative Engine Optimization — analyzed why documents fail to be cited by AI systems. The finding: citation failures are systematic, not random. Documents fail because of specific authority gaps at different stages of the citation pipeline, and generic content optimization can't fix an underlying credibility deficit.

Yotpo's research puts a number on it: third-party mentions in news outlets are roughly 3x more correlated with AI visibility than traditional backlinks. Brand-owned content — including the kind of product intelligence AMP optimizes — sits at the bottom of the trust hierarchy. Earned media in authoritative publications sits at the top.

This is why Perplexity's executives framed their business model decision the way they did when they killed the ad business in February. An executive told the Financial Times: "We are in the accuracy business, and the business is giving the truth, the right answers." Their citations policy reflects the same logic: "We can get the summary of that somewhere else, but we cite, we always try to cite that original source." That original source is not a product page. It's a publication the AI has learned to trust.

The mechanism behind this is not new. AI systems were trained on the same publication ecosystem that shaped human brand perception for decades. They learned what human readers learned to trust: independent editorial coverage in respected publications, not brand-authored product descriptions. When an AI evaluates your brand's credibility, it draws on years of publication history — Forbes, TechCrunch, Reuters, trade journals specific to your category — and weights that coverage far more heavily than any product schema you distribute.

AMP solves the data layer. It doesn't touch the trust layer. And the trust layer is what determines whether AI says "this brand is worth recommending" versus "here are this brand's specifications."


Why B2B is a harder version of this problem

L'Oréal has decades of editorial coverage. Unilever has been written about in every major business publication longer than most of their category managers have been alive. Even with gaps in product schema, AI agents know who these brands are.

A B2B software company or professional services firm starts from a completely different position. The brand recognition that gives L'Oréal a baseline trust score in AI systems usually doesn't exist. And the buying context is different in ways that make the trust layer even more consequential.

When a consumer asks ChatGPT what helmet to buy, the AI is making a product recommendation. When a procurement lead or founder asks an AI to evaluate vendors in a category, the AI is making a trust judgment. It's not comparing specs — it's surfacing which companies have the kind of track record enterprise buyers can stake their reputation on. The arXiv citation research is relevant here: AI systems are specifically evaluating authority signals, not just data completeness, when they decide what to surface.

The practical result: a B2B brand with clean product data and zero editorial presence will beat a B2B brand with messy product data and zero editorial presence. But they'll both lose to a competitor that has both. And the competitor with strong earned media coverage in authoritative trade publications will dominate the responses that matter most — where the AI is issuing a recommendation, not just listing options.

Most B2B brands haven't solved either layer. Many are only now learning that the product data layer exists. Very few have thought through what it means that AI systems weight editorial credibility the same way well-read buyers always did.


The gap L'Oréal's strategy reveals

What's worth noting about the AMP rollout isn't that the strategy is wrong. It's that five of the most sophisticated marketing organizations in the world are publicly treating product data as the primary AI visibility problem to solve.

If that's where companies with nine-figure marketing budgets are starting, it tells you something about where most B2B brands stand. They're starting from further behind, with fewer of the brand recognition advantages that make data optimization tractable, in a buying context where trust signals matter more than product specs.

Product data is table stakes. Getting clean, machine-readable product intelligence in front of AI agents is a prerequisite for being found. In B2B, the difference between being found and being recommended is almost always determined by the editorial record — what publications covered you, how they framed your expertise, whether the citation ecosystem treats your brand as an authoritative source or an unknown entrant.

That record is what Machine Relations addresses — not the schema, not the crawlability audit, but the earned media layer that AI systems draw from when they decide who to recommend. PR's mechanism always worked: a placement in a respected publication carries a trust signal that brand-owned content can't manufacture. The reader changed. The mechanism didn't.

AMP is a product. A trust layer is a track record. You can't buy a track record. You build one — ideally before your competitors do.

Run your AI visibility audit to see where you stand in the editorial layer before you invest in the data layer.
