The 30-Minute Audit That Shows What AI Actually Says About Your Brand
Most growth teams check citation count and move on. Here's the specific audit that reveals whether AI is giving your buyers accurate information — and the earned media steps to fix the gaps that cost deals.
The head of digital at Pernod Ricard didn't expect to find much wrong. He was running a routine check on how AI models described the company's liquor brands. What he found was a mess. One model was miscategorizing Ballantine's, an affordable mass-market Scotch, as a prestige product. The descriptions were incomplete. The positioning was off. None of it was how Pernod Ricard wanted buyers to encounter its brands for the first time.
He commissioned a formal audit. The findings are now in Harvard Business Review's March-April 2026 issue: most brands haven't run this check, and the ones that have are finding gaps they didn't expect.
If you're a growth exec or founder who has checked whether your brand appears in AI answers and called it done, this is the step you haven't taken.
Why accuracy matters more than presence
The default metric is citation count. Is your brand mentioned when someone asks ChatGPT or Perplexity about your category? If yes, it feels like a pass. The problem is that presence and accuracy are separate questions, and for most B2B companies, an inaccurate description does more damage than no mention at all.
McKinsey's January 2026 research on agentic commerce puts the stakes plainly: under moderate forecasts, AI agents could mediate between $3 trillion and $5 trillion in commerce as buyers increasingly delegate research and vendor comparison to AI. That's not a distant future. Buyers are already using ChatGPT and Perplexity to shortlist vendors and build comparison frameworks before a sales conversation starts.
When they do, the AI answers they get are only as good as the sources feeding them. And those sources are not your website.
Perplexity alone processed 780 million search queries in May 2025, growing at 20% month-over-month according to Bloomberg's reporting from the Bloomberg Tech conference. It is now a primary research channel for B2B buyers, not a novelty. Getting the AI description right in Perplexity isn't a visibility optimization — it's sales infrastructure.
What AI is actually reading
A February 2026 arXiv paper on LLM citation behavior confirmed what search practitioners have been observing for months: AI systems weight credentialed external sources (news publications, institutional research, academic content) far above brand-owned content when generating answers. Your product pages and blog posts are not the input layer for AI answers about your category. The publications that have covered you are.
That means your brand's AI representation is downstream of your editorial presence, not your content calendar. A description of your product in TechCrunch carries weight in AI answers that a hundred well-optimized website pages cannot replicate.
So you're not just checking whether you appear in AI answers. You're checking what the publications feeding that appearance actually say about you. (For a deeper look at how Perplexity's specific source selection logic works, this breakdown of Perplexity's three-layer reranking system maps exactly which domain and topic signals drive citation weight.)
The audit: three steps, 30 minutes
Step 1: Run the prompt battery
Open ChatGPT and Perplexity. Run each of these prompts, adjusted for your category:
- "What are the best [your category] tools for [your target buyer type]?"
- "Compare [your company] to [top competitor] for [use case]"
- "What does [your company] do and who is it for?"
- "Who are the leading [your category] companies in [your market segment]?"
For each response, record whether you appear, how you're described, and which sources are cited for that description. Save the citations. You want the source URLs, not just the answer text.
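If you'd rather make the battery repeatable than re-type prompts every month, here's a minimal Python sketch that runs the prompts through Perplexity's OpenAI-compatible API and saves each answer alongside its cited URLs. The endpoint, the "sonar" model name, and the top-level citations field reflect Perplexity's API at the time of writing (verify against current docs), and the category fills are placeholders to swap for your own.

```python
"""Minimal prompt-battery runner: sends each audit prompt to an AI
answer engine and saves the answer plus any cited source URLs.

Assumes a Perplexity API key in the PPLX_API_KEY environment variable.
"""
import json
import os

import requests

PROMPTS = [
    "What are the best {category} tools for {buyer}?",
    "Compare {company} to {competitor} for {use_case}",
    "What does {company} do and who is it for?",
    "Who are the leading {category} companies in {segment}?",
]

FILLS = {  # placeholders -- replace with your own category and names
    "category": "sales engagement",
    "buyer": "mid-market SaaS sales teams",
    "company": "YourCo",
    "competitor": "TopRival",
    "use_case": "outbound sequencing",
    "segment": "B2B SaaS",
}


def run_battery(api_key: str) -> list[dict]:
    results = []
    for template in PROMPTS:
        prompt = template.format(**FILLS)
        resp = requests.post(
            "https://api.perplexity.ai/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": "sonar",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        results.append({
            "prompt": prompt,
            "answer": data["choices"][0]["message"]["content"],
            # Save the source URLs, not just the answer text.
            "citations": data.get("citations", []),
        })
    return results


if __name__ == "__main__":
    battery = run_battery(os.environ["PPLX_API_KEY"])
    with open("audit_step1.json", "w") as f:
        json.dump(battery, f, indent=2)
```

Run it monthly and diff the JSON: changes in which sources get cited are an early signal that your AI description has shifted.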
Step 2: Trace the source layer
Take the citations from step 1 and read the actual articles — not the AI summary of them.
You're looking for three specific failure modes. The first is missing coverage: a top competitor appears because TechCrunch ran a piece on them last year, and you don't appear because no equivalent piece exists in a publication AI engines pull from. The same arXiv research on LLM citation preferences found that citation weight concentrates heavily in a small set of trusted domains — publications that AI systems have learned to treat as credible anchors. If your category's top publications haven't covered you, you're structurally absent from the AI answer for that category. This is the most common gap and the most fixable.
The second is stale or wrong descriptions. A piece from 18 months ago describes what you were, not what you are. If you've pivoted, narrowed your ICP, or launched a core product since that piece ran, AI answers built on it will describe the old version of your company. Buyers encountering that description won't recognize the company you're pitching.
The third is category mismatch, which is what caught Pernod Ricard off guard. Ballantine's was described as a prestige product because a handful of luxury publications had covered it, tilting AI's understanding of the brand. In B2B, this shows up as being described in a market segment you don't serve, or being lumped into a competitor's category rather than your own.
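To speed up the trace, a rough helper like the sketch below can fetch each URL saved in step 1 and flag pieces that look older than 18 months. Treat it as triage, not a replacement for reading: publish dates are pulled with a simple regex heuristic, and publishers mark dates inconsistently, so expect misses.

```python
"""Step 2 helper: fetch each cited article, grab its <title>, and flag
likely-stale sources. A rough heuristic sketch only -- it finds the
first ISO-style date anywhere in the page HTML, which varies by
publisher."""
import json
import re
from datetime import datetime, timezone

import requests

DATE_RE = re.compile(r"(20\d{2})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])")
TITLE_RE = re.compile(r"<title[^>]*>(.*?)</title>", re.IGNORECASE | re.DOTALL)

STALE_AFTER_DAYS = 548  # ~18 months, per the failure mode above


def trace_source(url: str) -> dict:
    record = {"url": url, "title": None, "published": None, "stale": None}
    try:
        resp = requests.get(url, timeout=30,
                            headers={"User-Agent": "brand-audit/0.1"})
        resp.raise_for_status()
    except requests.RequestException as exc:
        record["error"] = str(exc)
        return record
    if m := TITLE_RE.search(resp.text):
        record["title"] = m.group(1).strip()
    if m := DATE_RE.search(resp.text):
        published = datetime(int(m.group(1)), int(m.group(2)),
                             int(m.group(3)), tzinfo=timezone.utc)
        record["published"] = published.date().isoformat()
        age = datetime.now(timezone.utc) - published
        record["stale"] = age.days > STALE_AFTER_DAYS
    return record


if __name__ == "__main__":
    with open("audit_step1.json") as f:
        battery = json.load(f)
    urls = {u for item in battery for u in item["citations"]}
    traces = [trace_source(u) for u in sorted(urls)]
    with open("audit_step2.json", "w") as f:
        json.dump(traces, f, indent=2)
```

Anything flagged stale still needs a human read; the script just tells you which tabs to open first.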
Step 3: Score by impact and prioritize
Not every gap matters equally. Rank what you found by one question: if a buyer encountered this AI description before your first sales call, would it help or hurt?
Missing coverage in a publication that generates heavy AI citation for your category is the highest-priority fix. Stale positioning in a well-cited piece comes second. Category mismatch is third and worth flagging as an active sales risk, since your SDRs are probably already dealing with the confusion it creates downstream.
You're identifying the one or two earned media placements that would most change the description a buyer or AI agent gets when they research your company. That's your target list.
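If you want the ranking to be explicit rather than gut feel, a few lines of Python can encode it. The failure-mode order below mirrors the prioritization above (missing coverage, then stale positioning, then category mismatch), tie-broken by how often the source was cited in step 1; the structure is an illustrative assumption, not a validated scoring model.

```python
"""Step 3 sketch: rank gaps by failure mode, then by citation weight."""
from dataclasses import dataclass

# Lower rank = fix first, per the prioritization above.
FAILURE_MODE_RANK = {"missing_coverage": 0, "stale": 1, "category_mismatch": 2}


@dataclass
class Gap:
    publication: str
    failure_mode: str    # one of FAILURE_MODE_RANK's keys
    citation_count: int  # times this source appeared in step 1


def prioritize(gaps: list[Gap]) -> list[Gap]:
    # Sort by failure-mode priority first, then heaviest citation weight.
    return sorted(gaps, key=lambda g: (FAILURE_MODE_RANK[g.failure_mode],
                                       -g.citation_count))


if __name__ == "__main__":
    gaps = [  # hypothetical example data
        Gap("industry-blog.example", "category_mismatch", 1),
        Gap("techcrunch.com", "missing_coverage", 3),
        Gap("trade-pub.example", "stale", 2),
    ]
    for i, g in enumerate(prioritize(gaps), 1):
        print(f"{i}. {g.publication}: {g.failure_mode} "
              f"(cited {g.citation_count}x)")
```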
One way to frame the priority decision: McKinsey's February 2026 analysis of agentic AI in procurement describes the shift from "show me the data" to "do it for me" — agentic AI that carries out multi-step vendor evaluation on behalf of buyers. At that level of delegation, the AI agent reads your editorial presence the same way a human researcher would, but faster and without the patience to distinguish between who you are now and who a two-year-old article says you were.
The mistake most teams make at this point
They run step 1, see they're mentioned in four of five prompts, and call it a pass. This is the Pernod Ricard trap. Presence feels like success.
"Mentioned" and "accurately described" are not the same thing. In a B2B sales context, being described as the wrong type of company, serving the wrong market, or missing your key differentiator is worse than being absent. It creates a false floor that sales has to correct every first call. The audit is worth nothing if it stops at presence. What you're looking for is whether the description would help a serious buyer or confuse them.
Why earned media fixes this at the infrastructure level
There is no way to directly edit what AI says about your brand. You cannot submit corrections to ChatGPT. The only lever is the sources it pulls from, and those sources respond to the same thing they've always responded to: earned media placements in publications they trust.
This is the operational core of Machine Relations: a placement in a publication that AI engines index and trust updates the information layer those engines pull from. A well-placed profile or feature in a publication that leads your category's AI citation stack does more for your brand's AI representation than a year of on-site content work.
PR's original mechanism (earn coverage in credible third-party publications) always worked for human readers. It works the same way for machine readers because the sources didn't change. What changed is who's doing the reading and how much that reading now shapes the buyer's first impression before they ever reply to your outreach.
If the audit in step 3 tells you which publications would most change your AI description, those are the placements to prioritize. One placement in the right publication updates your AI representation in a way that compounds for every buyer who researches you afterward.
Related Reading
- AI Visibility for Media & Entertainment Companies: The 2026 Earned Media Playbook
- AI Visibility for Fashion: The 2026 Earned Media Playbook
Run the audit this week. Then check authoritytech.io/visibility-audit for a structured version that maps your current citation sources and flags the specific coverage gaps driving inaccurate AI answers for your category.