Buyers use AI search for discovery. Then they verify the results with editorial sources. Those are the same publications that shaped the AI's answer.
Forrester surveyed nearly 18,000 business buyers: 94% use AI in the buying process, and most then validate what the AI told them against trusted editorial sources. Your brand needs to be in both answers, and both run on the same mechanism.
Forrester surveyed nearly 18,000 business buyers for its State of Business Buying report in January. Ninety-four percent of them use AI during their buying process. That number ran through every marketing newsletter for two weeks. It was treated as proof that AI search optimization is now mandatory.
The finding directly after it got almost no coverage.
Buyers distrust the AI.
"AI-powered search tools offer speed and efficiency, but they can create mistrust by delivering incomplete or unreliable information," Forrester's report states. "Buyers compensate by seeking validation from trusted sources: peers, product experts, industry analysts, and others within their buying networks."
Two steps. The AI gives them a list. Then they go check the list.
Most brands are building strategy around the first step while the second step undoes them.
What the AI step actually does
When a buyer runs a vendor query through Perplexity, ChatGPT, or any enterprise-deployed AI research tool, the output comes with citations. Those citations aren't random. They come from publications that the model has indexed and treats as authoritative — tech press, trade media, analyst reports, category publications that have covered the space over years.
The brands that appear in those citations got there through earned media placement. Not because their website has better structured data or more comprehensive schema markup. Because real editors at real publications wrote about them, and AI engines weight those editorial sources heavily when constructing answers.
This is the part most AI visibility strategies are actually oriented toward — getting cited in the AI's first-pass answer. The content optimization, the llms.txt files, the structured data audits. All of it is trying to influence which brands come back when a buyer types a vendor query.
That's worth doing. But it's half of what's happening.
The step that comes after
Forrester's data is specific about what buyers do when they don't fully trust the AI's output. They go to trusted sources. Not the vendor's own content. Not another AI query. Publications, analysts, and editorial sources they already treat as credible.
The average B2B buying decision now involves 13 internal stakeholders, according to the same report. Procurement professionals serve as decision-makers in 53% of buying cycles and engage from the beginning of the process. When procurement runs a validation pass on the AI's vendor list, they're going to the publications and analysts they've read for years. The same ones that shaped the AI's answer in the first place.
This is the dynamic that the AI optimization conversation keeps missing. The AI gives you the shortlist. Then buyers reach for editorial sources to verify it. The brands that appear in both answers didn't build two separate visibility strategies. They placed their brand in trusted publications, and that placement covered both surfaces.
MIT Sloan examined this gap directly in a piece that circulated through marketing teams last year. A major U.S. fitness brand with one of the larger search investments in its space ran its own queries through AI tools to see what came back. A small local company from Houston was landing better placement in the answers. The brand that had spent years and significant budget building traditional search authority had no AI-era editorial presence, and the AI didn't know it existed.
That gap isn't a content problem. It's a placement problem.
The single mechanism underneath both
Procurement is now running on AI agents. McKinsey's February 2026 analysis on agentic procurement described the shift from "show me the data" to "do it for me" — AI agents comparing vendor offerings before a human has reviewed a single deck. Perplexity's Computer for Enterprise, now deployed inside Slack at enterprise scale, is executing exactly that workflow.
The companies that appear in those agentic research outputs are the same ones that appear when a buyer then manually goes to verify the AI's recommendation. The corpus that drives both is the same: trusted third-party editorial coverage in publications that AI systems and human buyers both treat as authoritative.
This is what Machine Relations was built on. Earned media is PR's core mechanism — the signal that always drove both discovery and credibility. The machine readers changed. The mechanism didn't. A brand placement in a publication that Forbes' editors or TechCrunch's reporters decided was worth covering is the same signal that tells an AI engine to cite it in an answer, and tells a VP of Procurement that the AI's answer was probably right.
The GEO industry treats this as a content optimization problem. More structure, better markup, sharper schema. That's the wrong layer. Perplexity's enterprise research outputs favor brands with editorial presence in trusted publications — because that's the same signal Perplexity was built to surface. Structured content on a brand's own site doesn't carry the same weight as a placed feature in a publication the AI engine already indexes as credible.
The brands Forrester's buyers are finding in AI search and then confirming through editorial validation aren't running two separate visibility strategies. They have placement. One thing covers both steps.
If you want to see where your brand sits across both surfaces right now — what AI answers currently say about you and where the editorial gaps are — the visibility audit gives you a clear read.