Morning Brief | AI Search & Discovery

Sam Altman Says AI Adoption Is 'Surprisingly Slow.' More Than Half Your B2B Buyers Disagree.

OpenAI's own CEO called enterprise AI adoption 'surprisingly slow.' His COO agreed. But 56% of tech buyers already use chatbots as their primary source for vendor discovery. These two facts aren't in conflict, and the gap between them is where the decision gets made.

Jaxon Parrott

Sam Altman said something in late February that a lot of founders and growth leads leaned on as a reason to wait. Adoption of AI across the economy, he told the New York Times, has been "surprisingly slow," the resistance to its absorption greater than he expected. One week later, his COO Brad Lightcap told a room in New Delhi: "We have not yet really seen enterprise AI penetrate enterprise business processes."

If you read those two statements as permission to wait, you read them wrong.

These aren't assessments of buyer behavior. They're statements about organizational process transformation: the integration of AI into formal enterprise workflows, procurement systems, and multi-team coordination infrastructure. That layer is still early. Lightcap launched OpenAI Frontier specifically to address it, then signed BCG, McKinsey, Accenture, and Capgemini to push it into the Global 2000.

Meanwhile, a Responsive study of more than 350 B2B buyers worldwide, published in October 2025, found that one in four B2B buyers already use generative AI more than conventional search when researching suppliers. Among technology and software buyers specifically, 56% say chatbots are now their top source for vendor discovery. Large enterprises, the exact segment Lightcap is trying to integrate AI into at the process level, are ahead of smaller firms: 42% already depend on AI for vendor discovery.

The buyers aren't waiting for enterprise AI to happen to them. They're already doing it on their own. That's what OpenAI's $20 billion in annualized revenue actually reflects: consumer and professional-tier usage at enormous scale, while the enterprise process layer Lightcap is describing is still being built.


Here is the gap that matters.

When Lightcap says enterprise AI hasn't penetrated business processes, he means the formal organizational layer: the procurement workflows, the ERP integrations, the AI-assisted vendor evaluation and RFP systems that run across a company of thousands. That transformation is slow, complex, and still ahead.

But buyers don't wait for their company's AI strategy to start researching vendors. They use personal ChatGPT. They open Perplexity on their phone. They ask Claude to compare your company against two competitors before they ever reach out to sales. This is how AI now handles the early stages of B2B vendor discovery: not as a formal enterprise deployment, but as individual buyer behavior that's already normalized.

The Responsive data backs this up in detail. Only 10% of buyers do minimal research before contact. More than a third perform detailed comparisons. Another 18% review financial data and case studies before engaging with a single sales rep.

This research is happening in personal AI sessions, not enterprise procurement systems. Enterprise process AI is slow. Buyer research behavior changed already.

So when a prospect at a $500 million company asks ChatGPT who the best option is in your category, the AI doesn't check your ad spend or your campaign budget. It pulls from publications it was trained to trust. If your brand has earned placements in those publications, you appear. If it hasn't, a competitor already has that conversation without you.


The misread that costs founders the most right now is treating AI visibility as a 2027 problem, something to address after enterprise adoption "really arrives." The logic sounds reasonable: if Altman himself says penetration is surprisingly slow, there must be time.

There isn't.

The moment that determines whether your brand appears in AI answers is already happening. It's the personal research session your buyer runs before they reach out. By the time formal enterprise AI procurement is live, when AI agents are handling vendor shortlisting and RFP scoring across large organizations, the citation infrastructure in trusted publications either exists or it doesn't.

Building it takes time. Earning a placement in Forbes, Harvard Business Review, or TechCrunch requires editorial relationships and genuine credibility. Those don't come online in a week or a quarter. The brands accumulating that presence now will be the default options when enterprise procurement AI formalizes. The ones that waited will be starting from zero in a market where the defaults are already set.

OpenAI spent years building toward its current default status before enterprises formally integrated it. That positioning wasn't the result of advertising. It was the result of presence, coverage, citations, credibility from real editorial sources. The brands that show up in AI answers in 2026 will be the defaults when the Frontier Alliance integrations go live in 2027 and 2028.


There's a reason AI engines cite publications rather than brand websites, campaign landing pages, or owned social content. These systems were trained on the same editorial ecosystem that shaped human brand perception for decades. The Forbes article, the Reuters mention, the industry journal placement: those signals carried weight because they represented third-party credibility no brand could manufacture on its own. AI engines inherited that same judgment.

The discipline of ensuring your brand is cited by AI systems rather than passed over is Machine Relations, and the mechanism it runs on is the same mechanism that made PR worth having in the first place. Earned media placements in publications AI engines trust. The reader changed. The mechanism didn't.

PR's original insight was right: a placement in a publication your buyer reads and respects is worth more than any ad you could run in the same space. That's more true now, not less, because AI systems pull citations from those same publications when constructing answers for your buyers.

What PR got wrong (the retainer model, the cold-pitch approach that floods journalist inboxes, the agencies that scale headcount instead of relationships) is compounding now. As more brands try to earn coverage to build AI citation authority, the inbox-flood problem gets worse and the editorial relationships that actually produce placements get harder to access. Direct relationships with editors aren't a nice-to-have. They're the differentiator.

Lightcap is right that enterprise process transformation is slow. He's also right that it's coming. When those integrations are live and AI is formally embedded in how large companies evaluate and procure from vendors, the citation ecosystem that determines what AI surfaces will already be in place.

The window isn't closed. But it's not static either. Every quarter of inaction is a quarter of citation infrastructure your competitors might be building instead.

If you want to see where your brand currently shows up when buyers search your category in AI, the visibility audit starts with the picture as it actually is, not how you hope it looks.