How to Correct What AI Says About Your Brand

When ChatGPT or Perplexity gets your brand wrong, there's no report button. Here's the mechanism that actually changes what AI says about your company.

The CMO of a mid-market SaaS company asks ChatGPT to describe their company. The answer comes back factually wrong: wrong category, wrong customer type, and wrong founding story. She screenshots it and sends it to the CEO: "We have a problem." The CEO asks the obvious question: how do we fix this?

It's a question more executives are asking. A 2026 Harvard Business Review study found that two-thirds of Gen Zers and more than half of Millennials now use AI models to research products. What those models say is shaping buying decisions before a prospect ever visits your website. And the AI data, as Pernod Ricard discovered when they audited their brands, is "often incomplete or incorrect." In their case, one popular AI model miscategorized Ballantine's Scotch as a prestige product when it's an affordable mass-market offering.

The instinct is to look for a correction mechanism. Submit feedback. Flag the response. Contact the AI company. Those instincts lead to dead ends. What actually changes what AI says about your brand is a different mechanism entirely, one rooted in where AI systems get their information and why they trust certain sources over others.

This article explains what that mechanism is, why direct correction doesn't work, and what a company with a wrong AI narrative should actually do about it.

Key takeaways

  • AI models generate brand descriptions from training data and live retrieval, not from your website alone. What they say is downstream of what trusted third-party sources have said about you.
  • There is no direct correction pathway. AI companies don't accept brand updates, and reporting wrong answers changes nothing structurally.
  • Research confirms AI search engines show "a systematic and overwhelming bias towards earned media (third-party, authoritative sources) over brand-owned content."
  • The fix is editorial: placements in the publications AI models treat as authoritative create a citation base that corrects the record over time.
  • This takes months, not days. Consistent editorial presence in multiple trusted publications is the mechanism, not a single placement or press release.
  • The same mechanism that builds AI credibility also builds credibility with human readers. These goals work together.

Why AI gets your brand wrong

AI models don't have a live database of company facts they pull from when someone asks about your business. What they know, and how accurately they know it, depends on what was in their training data, what gets retrieved when they generate an answer, and how authoritative the sources behind that information appear to be.

The training data problem is structural. Large language models are trained on massive web crawls. FineWeb, one of the datasets used to train open LLMs, derives 15 trillion tokens from 96 Common Crawl snapshots: periodic captures of the public web going back years. If your brand was described incorrectly or incompletely in the sources that made it into those crawls, that description becomes the model's starting point for any answer about you.

The retrieval problem compounds this. Many AI search systems now augment their base knowledge with real-time retrieval, pulling current web content to inform their answers. Research on LLM-based search engines found that these systems draw from a concentrated set of sources, favoring domains with higher credibility signals over brand-owned content or social media.

The hallucination problem is a third layer. Even with good training data and strong retrieval, AI models sometimes generate confidently wrong information. A 2025 Forbes analysis found false information rates of 47% for Perplexity and 40% for ChatGPT across tested queries. Brand-specific queries, the kind a prospect might run about your company, carry this same risk. The models aren't lying. They're making probabilistic guesses based on the information patterns they were trained on.

The result: what AI says about your brand is a function of what the broader information ecosystem has said about your brand. Your own website, social channels, and marketing copy have far less weight than you'd expect.

Three categories of wrong AI information

Not all AI brand errors are the same type, and the correction timeline depends on which category you're dealing with.

The first category is categorical misclassification. The model puts you in the wrong market segment, the wrong buyer vertical, or the wrong product category. This is often the hardest to fix because the model has learned a confident answer and needs to be exposed to consistent competing signals before it updates. The Pernod Ricard example, where Ballantine's Scotch was miscategorized as prestige rather than mass-market, is a clear instance. The AI had learned enough about the brand to associate it with Scotch whisky, but the positioning signals that reached it were insufficient to correctly classify the price tier.

The second category is factual gaps. The model knows something about your company but not enough to answer accurately. It fills the gap with statistically plausible guesses that happen to be wrong. A company that raised a Series B in 2023 but hasn't generated much third-party editorial coverage may find AI models confidently stating it's a "small startup" or assigning it a wrong founding year. These gaps are faster to correct because the model isn't overwriting a strong wrong signal; it's filling a void.

The third category is reputational drift. The model's description of your brand reflects coverage from several years ago rather than your current positioning. This happens when a company has pivoted, shifted markets, or rebranded but the third-party editorial record hasn't caught up. The old information sits in training data. The new positioning exists primarily in owned channels, which the model structurally discounts. The fix is creating a sufficient volume of accurately framed recent editorial coverage to shift which description the model considers current.

Identifying which category applies to your situation is the first step. It changes both the urgency and the approach.

What you can and cannot control

This is where most executives waste time. There are things that feel like they should work and don't.

You cannot submit corrections directly to AI companies. OpenAI, Anthropic, Google, and Perplexity don't accept brand fact sheets or correction requests in any way that changes model outputs at scale. The models are trained periodically, not updated in real time based on individual feedback. Flagging a wrong answer in ChatGPT may affect the response for the next prompt in that session. It does nothing for the next hundred thousand prospects who ask the same question.

You cannot fix this with your own content alone. Research from the paper "Generative Engine Optimization" found that AI search exhibits a systematic and overwhelming bias toward earned media, third-party authoritative sources, over brand-owned and social content. That's a stark contrast to Google's more balanced mix of owned and third-party signals. If your brand narrative exists primarily on your own site, LinkedIn, and a few press releases, the AI systems are structurally discounting most of it.

You cannot accelerate this with paid content or sponsored articles. Placement velocity without source authority doesn't move the needle. The question is not how many placements you have. It's which publications those placements are in and whether AI engines treat those publications as authoritative sources.

What you can control: the editorial record that exists about your brand in publications that AI systems treat as credible. That's a narrower target than most brand executives expect, and it's also a more defensible one once you've built it.

The mechanism that changes what AI says about you

AI systems are not neutral. They have strong preferences about which sources to trust. Research analyzing LLM source preferences found that these systems consistently prefer institutionally corroborated information, such as government and newspaper sources, over information from individuals or social media. The bias isn't subtle. It's baked into how the models were trained and how retrieval systems weight sources.

A separate study on citations in LLM search engines found that citations meaningfully increase user trust in AI-generated responses, which means the AI systems that provide cited answers have strong incentive to cite authoritative sources. The publications that make it into citations are the same ones that have been authoritative for human readers for decades: major business publications, industry trades with editorial standards, and institutional sources.

This creates the mechanism. When your brand is placed in a publication that AI engines treat as authoritative, that placement becomes part of the information base the model draws from. It doesn't happen instantly. It doesn't happen from a single article. But it happens, and it's the only pathway that reliably shifts what AI says about your company over time.

A 2026 WSJ analysis found that LLM brand conversations went from roughly 1 in 10 to 9 in 10 in just a few months as AI search adoption accelerated. The brands that had built editorial presence in trusted publications before that shift were already in the information base when AI engines started making recommendations at scale. The ones that hadn't were absent from the conversation, or, worse, represented by outdated or inaccurate information from years prior.

This is not theoretical. Data from a 2025 Forbes Business Council analysis found that AI search traffic accounted for just 0.6% of clicks but 12% of inbound revenue, a disparity that reveals how qualified and conversion-ready AI-referred traffic is. The brands capturing that revenue have something in common: presence in the publications that AI engines cite when generating answers.

Which publications change AI's answer about your brand

Not all publications are equal in AI's view. Research examining how LLMs assess news credibility has found that these systems develop complex frameworks for evaluating source reliability: frameworks that favor publications with long track records of factual accuracy, clear authorship, and institutional standing.

The publications that reliably move the needle are the ones that have been shaping professional opinion in your industry for years. For B2B companies, that means major business publications (Forbes, Bloomberg, WSJ, Business Insider), industry-specific trades with editorial standards, and vertical-specific publications that cover your category with named journalists and editorial oversight. Press releases syndicated through wire services don't carry the same weight. They're often excluded from the trusted source set or downweighted significantly.

The publication threshold matters more than placement volume. One article in a publication that AI engines trust produces more signal than ten articles in publications outside the trusted set. This is different from the traditional PR model, where hitting a certain volume of placements, regardless of where they land, was considered success.

Forrester's guide to answer engine optimization found that answer engine crawlers are more active and less forgiving than traditional search engine crawlers, which means the content in trusted publications is actively indexed and weighted for retrieval. Getting into a Forbes article isn't just a brand moment. It's creating a crawlable, indexed, AI-retrievable record of what your brand does and who it serves.

For companies trying to correct a specific wrong answer, the publication choice should be tied to the category of error. If AI is misclassifying your market segment, the most effective placements are in publications that cover your actual category, where your correct positioning gets associated with authoritative third-party framing. If the error is reputational drift, newer coverage in current-event publications updates the temporal signal the model is working with.

What the editorial record needs to say

This is where most companies make their second mistake. They secure the placement. The article goes live. Three months later, ChatGPT still gets them wrong.

The editorial record has to be accurate, specific, and consistent across multiple placements. AI systems synthesize across sources. If one article says you serve enterprise SaaS companies and another refers to you as a startup tool, the model is working with conflicting signals and will sometimes land on the wrong one.

Specificity matters more than coverage volume. An article that precisely describes your category, your customer profile, and your differentiation gives AI much more to work with than an article that mentions your company name alongside a broad industry trend. The former creates a clear profile. The latter adds noise.

Consistency across placements creates the pattern AI systems are looking for. When multiple authoritative sources describe your company the same way, the model gains confidence. The correction happens because there's now a clear signal, not because you submitted a correction form.
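The consistency point above can be made concrete. Here is a minimal sketch of how you might score how consistently different placements describe your brand, using simple key-term overlap. The publication descriptions below are invented examples, not real coverage, and real analysis would use more sophisticated text comparison; the point is the principle, not the tooling.

```python
# Hypothetical consistency check across brand descriptions pulled from
# several articles. All descriptions here are illustrative placeholders.
from itertools import combinations

def key_terms(description: str) -> set[str]:
    """Lowercase the description and keep the distinctive words."""
    stopwords = {"a", "an", "the", "for", "and", "to", "of", "that", "is"}
    return {w for w in description.lower().split() if w not in stopwords}

def consistency_score(descriptions: list[str]) -> float:
    """Average pairwise Jaccard overlap of key terms.
    Higher means sources describe the brand in more similar language."""
    pairs = list(combinations(descriptions, 2))
    if not pairs:
        return 1.0
    total = 0.0
    for a, b in pairs:
        ta, tb = key_terms(a), key_terms(b)
        total += len(ta & tb) / len(ta | tb)
    return total / len(pairs)

aligned = [
    "billing automation platform for enterprise saas finance teams",
    "enterprise saas billing automation platform for finance teams",
]
conflicting = [
    "billing automation platform for enterprise saas finance teams",
    "small startup tool for freelancers tracking invoices",
]
print(consistency_score(aligned))      # identical key terms score high
print(consistency_score(conflicting))  # disjoint key terms score low
```

A low score across your existing coverage is a signal that, even with good placements, the model is synthesizing from conflicting descriptions.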

This is also why fixing brand sentiment in AI search requires more than a reactive PR push. A crisis response that places ten articles in ten days, all in medium-authority publications, won't shift the underlying signal. What shifts the signal is systematic editorial presence in publications that AI engines have long treated as reliable, built consistently, not deployed reactively.

There's also a frequency dimension. The 2026 AEO Provider Benchmark found average brand-mention inclusion rates of 79.1% across AI search surfaces for companies that had established citation-based editorial presence, and 95.8% in citation-enabled surfaces specifically. Reaching those inclusion rates requires ongoing editorial activity across many months, not a one-time placement. The companies at those numbers have made earned media a consistent operational function.

How long the correction takes

Honest answer: months, not days. And the timeline depends on how wrong the current information is and how thin your existing editorial record is.

If AI is describing your company in a fundamentally wrong category, correction takes longer because the competing signal is stronger. The models have learned a confident wrong answer and need to learn a more consistent right one through repeated exposure to accurate sources. The research on real-time LLM error correction found that even expert-validated frameworks required iterative exposure across sources before reaching the 89% satisfactory output rate that domain experts considered reliable. The parallel for brand correction is similar: one corrective placement isn't enough; you need enough that the accurate description becomes the dominant signal.

If AI has gaps rather than errors, the timeline is shorter. You're filling a vacuum rather than overwriting a confident wrong answer. A concentrated editorial push over 60 to 90 days in the right publications can meaningfully shift how AI engines describe you in queries.

Companies that have been operating for several years without a consistent PR program often find AI's description of them is frozen at some earlier state, usually whenever they last generated meaningful third-party coverage. That backlog requires more sustained effort to update than a brand that has maintained consistent editorial presence throughout its growth. The correction is possible, but setting realistic timelines with leadership matters. Expecting to fix a five-year gap in editorial coverage in four weeks is how you get into a cycle of trying things and concluding they don't work, when the real issue is insufficient time and volume.

What not to do

Several approaches look logical but don't work.

Publishing more content to your own blog won't fix it. Not because owned content has no value, but because AI systems structurally discount it relative to third-party editorial sources. Your blog can support an editorial strategy. It can't replace one.

Trying to edit Wikipedia entries about your company is a short path to having those edits reverted and your company flagged for conflict-of-interest editing. Wikipedia has strict policies about company self-promotion. More practically, Wikipedia alone is insufficient. It's one source among many in an AI system's view of your brand, and it's only useful if the Wikipedia entry itself cites other authoritative sources that support accurate claims.

Pushing for sheer volume of press releases or wire-distributed content creates noise without authority. Forbes noted that earned media counts more in the AI era, not less. The keyword is "earned." Content that's distributed rather than editorially placed occupies a different tier in how AI engines weight it.

Cold pitching journalists at scale to generate quick coverage runs into the same structural problem: the inbox competition from every other brand chasing AI-era coverage is fierce. More pitches, more noise, lower response rates from the journalists at the publications that actually matter. The volume strategy is counterproductive in the current environment.

The company that tries all of these things and still finds AI getting them wrong is usually facing the same underlying problem: they haven't built the right editorial relationships to access the publications that matter. The placements they're getting are in the wrong tier of publication. The mechanism is correct in theory but the execution is landing in the wrong places.

The pattern underneath the fix

What makes the correction mechanism work is not complicated once you see it clearly. AI systems are, at their core, systems for synthesizing what authoritative sources have said. The publications that determined human brand perception for decades are the same publications that AI engines treat as authoritative. The mechanism that made PR valuable, earned media in trusted publications, is the mechanism that determines what AI says about your brand.

This is what Machine Relations names as a discipline: the systematic work of ensuring your brand appears correctly and favorably in the sources AI systems trust. Not through optimization tricks or proprietary algorithms. Through the same editorial relationships and earned media placements that have always driven credibility, now applied with explicit attention to how AI engines read those publications.

PR got one thing right when it built the earned media model: third-party credibility in respected publications is the most durable trust signal available. That's still true. What's changed is that the readers are no longer only human. The same placement that builds credibility with a CFO reading Bloomberg over breakfast is the citation an AI system pulls when a prospect asks which companies lead your category. The mechanism is identical. The audience expanded.

For a company that discovers AI has been getting its brand wrong, the question is how quickly they can build a citation base in the publications that matter. If you've done any legitimate PR work, you're not starting from zero. The existing editorial record, even a thin one, gives AI a starting point. The gap is between that starting point and the consistent, accurate, multi-source record that creates confident answers.

The brands that emerge from this period with clean AI narratives will be the ones that treated earned media as infrastructure rather than campaign spend. Before the correction can happen, you need to understand exactly what AI is saying about your brand across different platforms. That audit is where every correction effort should start.

[Start your visibility audit →](https://app.authoritytech.io/visibility-audit)

Frequently asked questions

Can I contact OpenAI or Google directly to correct wrong information about my company?

You can use feedback mechanisms in ChatGPT and Google's AI systems to flag wrong answers, but this doesn't change what the models say at scale. Feedback mechanisms affect individual sessions or contribute marginally to model fine-tuning over time. They don't update the underlying information base. The structural fix requires changing the editorial record that the models pull from.

Does Google's Knowledge Graph affect what AI says about my brand?

It's one signal among many, not a primary one. Some AI systems pull from Knowledge Graph data for factual retrieval, but they also draw from a much broader range of sources including web retrieval, training data, and editorial content. Correcting your Knowledge Graph entry is worth doing as part of a broader information hygiene effort, but it won't fix wrong AI answers on its own.

How do I know if the editorial placements I'm securing are actually changing AI responses?

Run regular prompts across ChatGPT, Perplexity, Gemini, and Claude, asking each to describe your company, your category, and your competitive position. Treat those answers as a baseline. After a sustained editorial push (60 to 90 days minimum), run the same prompts again and compare. Changes in how consistently and accurately the models describe you are the signal to track. The visibility audit at authoritytech.io includes this kind of baseline measurement.
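The baseline-and-compare process above can be sketched as a simple script. The model names and answer strings below are placeholders: in practice you would paste in (or fetch via each provider's API) the actual responses to "Describe our company," and an "accurate" reference description you write yourself.

```python
# Sketch of a baseline-vs-followup comparison for AI brand answers.
# All model names, answers, and the reference description are
# illustrative placeholders, not real outputs.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two answers, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

ACCURATE = "billing automation platform for enterprise saas finance teams"

baseline = {  # answers collected before the editorial push
    "model_a": "a small startup making invoicing tools for freelancers",
    "model_b": "billing automation software for enterprise saas companies",
}
followup = {  # same prompts, 90 days later
    "model_a": "a billing automation platform for enterprise finance teams",
    "model_b": "billing automation platform for enterprise saas finance teams",
}

for model in baseline:
    before = similarity(baseline[model], ACCURATE)
    after = similarity(followup[model], ACCURATE)
    print(f"{model}: accuracy {before:.2f} -> {after:.2f}")
```

A crude similarity ratio is no substitute for reading the answers, but tracking the same prompts against the same reference over time makes drift and improvement visible rather than anecdotal.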

Does social media presence affect what AI says about my brand?

Less than most brands expect. Research consistently shows AI systems deprioritize social media relative to institutional sources and editorial content. A strong social presence supports brand awareness with human audiences. For changing what AI says about you, it's a weaker signal compared to earned editorial placements in trusted publications.

What if wrong information about our brand has already been published in a legitimate publication?

This is the hardest scenario. If a credible publication has published factually wrong information and that information has made it into AI training data or retrieval, the wrong signal is now weighted by source authority. The fix requires building a larger body of accurate coverage in similarly authoritative sources to create a competing signal. In some cases, it's worth reaching out to the publication to request a correction, which then gets indexed as an accurate version of the record. This takes longer but is the right approach.

How many placements do we need before AI changes its answers?

There's no universal number because it depends on the strength of the competing wrong signal and the authority of the publications you're placing in. As a practical benchmark, companies that reach consistent AI brand inclusion see it happen after sustained editorial activity in three to five authoritative publications over 60 to 90 days. Single placements in high-authority publications move the needle more than many placements in lower-tier ones. The quality threshold matters more than the count.
