Afternoon Brief | Marketing Strategy

Your Best Buyers Are Researching You Behind a Firewall. Your Analytics Show Nothing.

61% of B2B buyers use private AI tools their employers issued. Those sessions never touch your analytics. Here's what that means for how you earn visibility before buyers ever reach you.

Christian Lehman

You've probably noticed your traffic from AI tools is disappointing. ChatGPT referrals trickle in. Perplexity sends a few clicks. You run the numbers and conclude AI search isn't a real channel yet.

The conclusion is wrong. You're measuring the wrong thing.

Forrester's Buyers' Journey Survey, 2025 — published in January 2026 — found that 61% of business buyers use private AI tools provided by their employer. Microsoft Copilot, private ChatGPT instances, Gemini behind a corporate SSO login. B2B buyers are four times more likely to use Microsoft Copilot than consumers, and more than half of those users are running private instances, behind their company's firewall, on infrastructure your analytics will never see.

Those sessions don't produce referral traffic. They don't fire pixels. They don't touch your CRM. The buyer forms an opinion about your brand, shortlists two or three vendors, and visits your site days later to confirm what they already decided.

By the time they show up in your funnel, the decision is already half-made.

What private AI research actually looks like

When a senior buyer at a 500-person company runs vendor research today, it often starts inside Microsoft Teams with Copilot. They type something like: "Compare the top AI visibility platforms for a B2B SaaS company our size." Copilot synthesizes whatever it can access — public web, enterprise-indexed content, SharePoint if it's integrated — and returns an answer.

That answer is not drawn from your ad spend. It's not drawn from your landing page. It comes from the editorial ecosystem around your brand: what publications have said about you, what benchmarks include you, what comparisons name you in the right context.

Forrester's data makes the urgency concrete. Generative AI is now the single most cited meaningful source of information for B2B buyers — above vendor websites, product experts, and sales reps. At the same time, 20% of buyers report they've been less confident in decisions because of inaccurate AI-generated information. That mistrust doesn't reduce AI use. It adds more validation steps — which is why buying groups now average 13 internal stakeholders and nine external influencers, per Forrester's State of Business Buying, 2026.

More research happens before buyers talk to you. More of it is invisible.

The four gaps this creates

Absence is the same as a no. If your brand doesn't appear in a private Copilot answer about your category, you were never considered. There's no page two in an AI response. The buyer moves on.

Inaccuracy spreads without correction. When AI tools describe your brand incorrectly — wrong positioning, wrong use case, outdated differentiators — that description circulates inside a buyer's organization across multiple research sessions before anyone talks to you. You get a discovery call where the buyer's mental model is already wrong.

Zero visibility into the research phase. Traditional attribution models assume the buyer starts on Google. They don't anymore. Nearly half of buying group members cited generative AI as a meaningful source before any vendor touchpoint. You cannot retarget what you can't see.

Your competitors may already be better positioned. If a competitor appears regularly in your category's earned media — Forbes pieces, TechCrunch comparisons, industry analyst write-ups — AI tools have cleaner signals to pull from. Better editorial presence means better representation inside the tools buyers use daily.

The four moves that actually work

Run the prompts your buyers run. Start with the buyer-intent queries they'd actually use: "Best [category] platform for [company size]," "How does [your brand] compare to [Competitor A]?" Run them in ChatGPT, Perplexity, and Google's AI Overviews. Note whether you appear, where you appear, and how you're described. If the description is wrong, that's not a UX problem. What AI says about your brand is downstream of what trusted publications have written about it.
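The audit above can be partly scripted. A minimal sketch, assuming you've pasted the raw answer text from each tool by hand (or pulled it via an API): it checks one answer for brand presence, how early the mention lands, and which competitors appear alongside you. Every brand name and answer below is hypothetical.

```python
def audit_answer(answer: str, brand: str, competitors: list[str]) -> dict:
    """Score one AI answer for brand presence.

    Returns whether the brand is named, how early it appears
    (as a fraction of the answer, 0.0 = the very start), and
    which competitors are named in the same answer.
    """
    text = answer.lower()
    pos = text.find(brand.lower())
    return {
        "mentioned": pos != -1,
        # Earlier mentions suggest stronger association with the query.
        "position": pos / max(len(text), 1) if pos != -1 else None,
        "competitors_named": [c for c in competitors if c.lower() in text],
    }

# Hypothetical answer text copied from a ChatGPT or Copilot session.
answer = (
    "For mid-market B2B SaaS, the leading AI visibility platforms "
    "include Acme Insights and RivalMetrics. Acme Insights is known "
    "for earned-media tracking."
)

report = audit_answer(answer, "Acme Insights", ["RivalMetrics", "OtherCo"])
print(report["mentioned"], report["competitors_named"])
# True ['RivalMetrics']
```

Run the same check across your full prompt list and the pattern matters more than any single answer: which queries name you at all, and in what company.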

Target the format AI tools actually pull from. Private AI research doesn't pull heavily from your own site. According to Muck Rack's analysis of over one million AI prompts, more than 85% of non-paid AI citations come from earned media sources — third-party editorial coverage, not brand-owned content. And according to analysis of over 2,500 unique domains cited by AI platforms, listicle-format content — "Top N" comparisons and roundups — accounts for nearly 60% of all cited URLs. Your team is probably writing articles instead. If your PR effort produces profiles and announcements, you're building coverage AI largely ignores when buyers ask comparison questions.

Build topic clusters, not campaigns. A single placement in a relevant outlet is a data point. Three to five placements across different publications on the same category claim start to make you the default answer. AI tools treat corroboration across sources as a trust signal — multiple independent credible sources confirming the same thing about your brand matters far more than one landmark placement. Coverage velocity across outlets beats any single notable mention.

Fix inaccurate positioning before the discovery call. The buyer showing up with a wrong mental model is a pre-sales problem. Earned coverage that explicitly addresses your category, use case, and differentiation — placed in the publications AI tools cite — is how you correct the record before the conversation starts. Trying to fix it on the call is already a recovery conversation. The most common citation failures come down to coverage that sits in the wrong formats or wrong outlets — both solvable with the right editorial targeting.

The measurement shift

Stop optimizing for AI referral traffic. It's not the signal. Private AI sessions are invisible by design, and even public AI tools generate mostly zero-click interactions — buyers read the answer without clicking through.

The metric that matters is citation presence. How often does your brand appear as a named answer for the prompts your buyers run? That's what you measure in 30-day intervals, not to replace traffic reporting, but alongside it. Citation share tells you what's happening in the research phase. Traffic tells you what happened after.
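One way to operationalize that 30-day cadence: treat each prompt run as a trial and compute the share of runs in each window where your brand was named. A sketch with hypothetical run data; the window labels and run records are assumptions, not a prescribed schema.

```python
from collections import defaultdict

def citation_share(runs: list[dict]) -> dict[str, float]:
    """Share of runs per window in which the brand was cited.

    Each run is {"window": <30-day label>, "cited": bool}.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for run in runs:
        totals[run["window"]] += 1
        hits[run["window"]] += run["cited"]  # bool counts as 0 or 1
    return {w: hits[w] / totals[w] for w in totals}

# Hypothetical audit: the same 4 prompts run in two consecutive windows.
runs = [
    {"window": "2026-01", "cited": False},
    {"window": "2026-01", "cited": True},
    {"window": "2026-01", "cited": False},
    {"window": "2026-01", "cited": False},
    {"window": "2026-02", "cited": True},
    {"window": "2026-02", "cited": True},
    {"window": "2026-02", "cited": False},
    {"window": "2026-02", "cited": False},
]
print(citation_share(runs))
# {'2026-01': 0.25, '2026-02': 0.5}
```

A move from 25% to 50% citation share on the same prompt set is exactly the kind of research-phase signal traffic reporting never surfaces.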

Forrester's conclusion is direct: "The marketing model that has worked in the past — driving traffic to your site to retarget and nurture prospects — will be much less effective. Buyers will spend more and more of their buying process with AI answer engines." You cannot fix that by publishing more landing pages. You fix it by earning the editorial presence that AI tools treat as authoritative.

Why this is an earned media problem

The visibility gap is not a technical fix. It's not schema markup or FAQ pages — those have marginal value for the discovery queries buyers actually run. The gap closes when credible third-party publications describe your brand accurately, in the formats and contexts that AI tools cite. A December 2025 study by Stacker and Scrunch measuring 944 prompt-platform combinations found that distributing content across a diverse set of third-party news outlets increased AI citation rates by 325% — from 8% to 34%.

That's what Machine Relations describes: the discipline of building the earned media infrastructure that makes your brand legible, accurate, and citable inside the AI systems your buyers consult before they find you. The publications that shaped how buyers thought about vendors for twenty years are the same publications AI tools treat as authoritative. The reader changed. The mechanism didn't.

The buyers researching you behind corporate firewalls right now are consulting tools that run on the same editorial graph. The question is what that graph says about your brand — and whether the answer is accurate.

Run a visibility audit to see how you currently show up in AI answers, and where the gap is.
