There Is No AI Ranking. There's Only What ChatGPT Was Trained to Know About You.
Marketing teams are spending $100M+/yr tracking AI brand rankings. GPT-5.3's model card, released today, explicitly says ChatGPT is moving away from web link lists toward synthesis from its own trained knowledge. That knowledge was built from earned media, not SEO.
Marketing teams are spending $100 million per year tracking their brand's position in AI search results. OpenAI published a model card this morning explaining, without naming the industry, exactly why that's the wrong game.
GPT-5.3 Instant dropped today. The update got coverage for its hallucination reductions — 26.8% fewer with web search, according to OpenAI's system card — and better conversational tone. But the sentence that matters most for anyone thinking about brand visibility sits here, in OpenAI's own words:
"GPT-5.3 Instant is less likely to overindex on web results, which previously could lead to long lists of links or loosely connected information. It does a stronger job of recognizing the subtext of questions and surfacing the most important information, especially upfront."
ChatGPT's most-used model, serving over 200 million weekly active users, is explicitly engineered to lean less on web search results and more on synthesis from its own trained knowledge and reasoning. That's not a minor UX adjustment. It's OpenAI telling you, in product language, that the model is moving away from the thing the AI visibility tracking industry built itself around.
The list was already a lottery
The AI visibility tracking industry was built on a premise that seemed reasonable: get your brand into the list when ChatGPT searches the web. Same logic as Google SEO. Rank in the results. Drive the impression.
Rand Fishkin's SparkToro ran the most rigorous study to date on whether that list is even real. 600 volunteers. 2,961 runs across ChatGPT, Claude, and Google AI. Twelve different prompts. The finding: AI brand recommendations are so randomized that there is less than a 1-in-100 chance of seeing the same list in any two runs of the same prompt. The ordering is closer to 1-in-1,000.
Built on a Carnegie Mellon framework for measuring LLM consistency, the research concluded that tracking AI rankings is measuring noise, not brand authority. The study put it plainly: these aren't endorsements. They're tokens that follow other tokens in a statistical model. The list is essentially a lottery draw from a pool of candidates, and the pool shifts with every run.
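That kind of consistency claim is easy to sanity-check yourself. A minimal sketch of the idea, not SparkToro's actual methodology: run the same prompt repeatedly, record the brand list each time, then compare every pair of runs for exact matches (same brands, same order) and set overlap (Jaccard similarity, ignoring order). The brand names and run data below are hypothetical.

```python
from itertools import combinations

def jaccard(a, b):
    """Set overlap between two brand lists, ignoring order (0.0 to 1.0)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def consistency(runs):
    """Compare every pair of runs of the same prompt.

    Returns (exact_match_rate, mean_jaccard_overlap):
    - exact_match_rate: fraction of pairs with identical lists in identical order
    - mean_jaccard_overlap: average set overlap across all pairs
    """
    pairs = list(combinations(runs, 2))
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    overlap = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
    return exact, overlap

# Hypothetical brand lists from four runs of one prompt.
runs = [
    ["BrandA", "BrandB", "BrandC"],
    ["BrandB", "BrandA", "BrandD"],
    ["BrandA", "BrandC", "BrandE"],
    ["BrandB", "BrandA", "BrandC"],
]
exact, overlap = consistency(runs)
```

In this toy data, no two runs match exactly even though the lists share brands, which is the lottery dynamic the study describes: a shifting pool of candidates, drawn in a different order every time.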
So two things are now simultaneously true. The list you've been tracking is random. And GPT-5.3 is producing that list less often.
What the model actually draws on
When ChatGPT synthesizes an answer without pulling from a web search, which GPT-5.3 is now designed to do more often, it's drawing on what the model already knows. That knowledge was built during training. And that training data came from publications.
Not every publication. The AI training corpus draws heavily from sources that established credibility over years: major outlets, high-authority industry publications, editorial sites with long track records of being cited and indexed. A placement in Forbes, TechCrunch, or Harvard Business Review does something most marketers haven't thought about: it gets indexed in the same corpus that trained the model's understanding of who the authoritative voices are in your category.
This is not theoretical. It's how AI search engines select their sources. The same credibility weighting that drives editorial selection is what AI engines use to decide which sources earn citations. And it's why Forbes coverage measurably improves AI search visibility for the brands earning it.
The model isn't pulling from your latest LinkedIn post or your blog's domain authority score. It's drawing from the layer of editorial infrastructure that trained it, and for most companies, that layer is either thin or built on the wrong things.
The question you should actually be asking
Most of the conversation in the marketing and AI visibility space is about the wrong variable. "What's our ranking in ChatGPT?" is a question about a list that SparkToro just proved is essentially random. GPT-5.3 is playing that game less often.
The right question is: when someone asks ChatGPT for a recommendation in your category and the model answers from its own knowledge, without triggering a web search, does your brand come up?
That answer is downstream of where your company has earned placements over the last two to five years. Not ads. Not technical SEO. Earned media in the publications that shaped the model's understanding of your space.
Most companies don't know the answer to that question. Their marketing teams are watching dashboards that measure a lottery while the gap between brands embedded in AI's trained knowledge and brands that aren't grows a little wider each week.
The window for building that presence is not closed. But the compounding math here is not friendly to late movers. Earned media is a relationship game with a long time horizon. You don't build editorial presence in Q3 and see AI citation results in Q4.
The mechanism behind this era
This is what Machine Relations describes: the discipline of building the editorial presence AI engines already trust. Earned media in respected publications was always the mechanism that built brand authority with human buyers. It turns out AI engines were trained on the same publications, weighted the same credibility signals, and draw their citations from the same authoritative infrastructure.
What changed is the reader. The PR model that earned those placements — retainer fees, cold pitching, metrics that measured reach without measuring results — was always broken. Machine Relations keeps the mechanism and rebuilds what was broken around it: outcome-based, relationship-first, direct access to the editorial relationships that actually move citations.
Brands that get this are building something that compounds. Every placement in a trusted publication reinforces what the model already understands about their authority. GPT-5.3 drawing more on that trained knowledge base and less on a randomized web search isn't a threat to those brands. It's the whole point.
For everyone else: the rankings dashboard is measuring noise. The model just moved further from the web list and closer to what it already knows. And what it knows was built long before your team started tracking it.
Want to see where your brand actually stands in AI search? Run the visibility audit.