AI Visibility Went Enterprise Today. Citations Are Still Earned, Not Optimized.
Semrush just launched an enterprise AI visibility platform. Four other tools have launched in the past month. None of them can actually improve your citation rate, because citations aren't a setting you change in software.
Semrush launched an enterprise AI Optimization (AIO) platform today, currently in closed beta and positioned for Fortune 500 accounts, built to track how brands appear across ChatGPT, Perplexity, and Google AI Overviews. The announcement bills it as "the first enterprise solution to help businesses track, control, and optimize brand presence across AI-powered search platforms."
Except the category is already crowded. Capxel defined something called "AI Search Optimization" three days ago. Brandi AI launched a structured AI visibility framework last week. Ridge Marketing shipped an AI visibility audit service Monday. Profound hit a $1 billion valuation last month doing the same basic thing at scale.
What's happening here is a measurement wave. Not a visibility wave. That distinction is going to matter more than any of these launch announcements will tell you.
Key takeaways
- AI visibility software measures your citation rate. It cannot improve it. Citations are determined by your editorial record in third-party publications, not by dashboard settings.
- Muck Rack's Generative Pulse analysis of millions of LLM citations found that 89% of AI citations come from earned media, not brand-owned content.
- AI citation patterns are recency-weighted. More than half of all journalism citations in AI answers come from articles published in the last 12 months. Placements today determine AI answers in Q3 and Q4.
- The competitive risk isn't the score you can see. It's the editorial placements your competitors are making now, which won't show up in any dashboard until the gap is already hard to close.
The threat the wave is responding to is real.
Gartner predicted in 2024 that traditional search engine volume would drop 25% by 2026 as users shifted to AI-powered answer engines. That prediction is playing out. The March 2026 issue of Harvard Business Review ran a piece on brand visibility in the agentic AI era that opened with a finding from Pernod Ricard. The company discovered that two-thirds of Gen Z consumers and more than half of Millennials now use large language models to research products before buying. When Pernod Ricard analyzed what the AI models were actually saying about their brands, the data was often incomplete or wrong. One model miscategorized an affordable mass-market scotch as a prestige product.
This is not just a consumer goods problem. McKinsey published an analysis in February showing that agentic AI is actively reshaping enterprise procurement, not as a trend companies are watching but as something happening in current buying cycles. Procurement teams are running AI agents to shortlist vendors, evaluate pricing, and draft negotiation frameworks. The AI output during that process draws from sources those systems have been trained to trust.
If AI is doing the first cut of vendor research for buyers, your visibility in AI answers is a pipeline question. The growth teams who haven't internalized that yet are operating on an assumption that expired sometime last year.
Here is where the software wave gets it wrong.
The enterprise AI visibility tools launching this week are diagnostic products. They show you where your brand appears in AI answers, how often, what sentiment the mentions carry, and how you compare against competitors. The implicit promise is that visibility into the problem gives you the inputs to fix it.
It doesn't. Your AI citation rate is not a setting you adjust.
What determines whether ChatGPT or Perplexity cites your brand comes down to one thing: what authoritative third-party publications have written about you. AI engines don't construct answers from brand websites or owned content; they cite the sources that have spent decades building editorial credibility with human readers. Our analysis across 1M+ AI citations found that Forbes is the only traditional media outlet cited consistently across all 11 major B2B and B2C sectors. ChatGPT and Gemini lean heavily on Reuters, the Financial Times, Forbes, and Axios. The pattern holds across engines and sectors.
Muck Rack's Generative Pulse research, which analyzed millions of LLM citations, found that 89% come from earned media, not brand-owned content. That number should reframe how you think about what "AI optimization" actually means. There is no on-page adjustment that changes it. You can add schema markup, clean up your content structure, and monitor your score every week. None of that affects whether the Financial Times has covered your company or whether TechCrunch has written up your category.
A low AI visibility score is a read on your editorial record. Software can measure it. Software cannot improve it.
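To make that distinction concrete, here is a minimal sketch of what the measurement side amounts to, using entirely hypothetical data: a handful of sampled AI answers with the URLs they cited, a stand-in brand domain (example-brand.com), and two counts. No real tool's API is modeled here; the point is that everything a script like this can produce is a read-out.

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical sample: each entry is one AI answer to a buyer-style prompt,
# with the source URLs the engine cited. Real tools collect this at scale.
sampled_answers = [
    {"prompt": "best analytics platforms for mid-market retail",
     "cited_urls": ["https://www.forbes.com/...", "https://techcrunch.com/..."]},
    {"prompt": "top analytics vendors compared",
     "cited_urls": ["https://www.reuters.com/...", "https://example-brand.com/blog/..."]},
]

def citation_rate(answers, brand_domain):
    """Share of sampled answers that cite at least one URL from brand_domain."""
    hits = sum(
        any(urlparse(u).netloc.endswith(brand_domain) for u in a["cited_urls"])
        for a in answers
    )
    return hits / len(answers) if answers else 0.0

def top_cited_domains(answers, n=5):
    """Which publications actually dominate the citations in the sample."""
    domains = Counter(
        urlparse(u).netloc for a in answers for u in a["cited_urls"]
    )
    return domains.most_common(n)

print(citation_rate(sampled_answers, "example-brand.com"))  # a score, nothing more
print(top_cited_domains(sampled_answers))                   # the earned-media gap
```

The second output is the one that matters, and nothing in the script moves it. The list of domains that dominate the citations only changes when the publications on that list write about you.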
What the tool wave is actually telling you about competitive risk.
Here is what should concern you more than your current score: the companies building earned media records right now will not appear in your competitive analysis for months.
AI citation patterns are recency-weighted. More than half of all journalism citations in AI answers come from articles published in the last 12 months. The brands getting placed in Forbes and TechCrunch this quarter are building the citation infrastructure that will determine AI answers in Q3 and Q4. By the time a competitor's AI visibility score shows up as a gap in your monitoring dashboard, the editorial record behind it is already well established.
This is the actual competitive risk. Not the score you can see today. The editorial placements your competitors are making now, which won't show up in any dashboard until the gap is already hard to close.
The Semrush AIO dashboard will show you the score six months from now. It won't show you who was building the record while you were watching the score.
Related Reading
- AI Visibility for Growth-Stage Startups (Series A–B): The 2026 Earned Media Playbook
- AI Visibility for Fintech Companies: How to Get Cited by ChatGPT, Perplexity, and AI Search
What this era is actually called.
PR got one thing exactly right: earned media in authoritative publications has always been the strongest trust signal a brand can build. What PR got wrong was the operating model. Retainers billed regardless of results. Cold pitch campaigns that flooded journalist inboxes and made editors harder to reach over time. Agencies that scaled headcount rather than the relationships that actually determine placement rates.
Machine Relations is what happens when you keep the mechanism and rebuild everything broken around it. The same earned media placements that build brand credibility with human readers are the citations AI engines pull when a prospect asks who the most credible option in your category is. The publications haven't changed. The reader has. That's why an entire category of AI visibility software now exists — and it's also why software alone cannot solve what that category is trying to address.
You can monitor AI answers forever. What improves your position in them is a consistent record of placements in publications the AI systems have been trained to trust.
If you want to know where your brand actually stands in AI answers right now, and which publication gaps are creating the most exposure, the visibility audit will show you exactly that.