How to Get Cited in Claude AI Answers

Claude has overtaken ChatGPT as the top AI app. Learn how Claude selects the brands and sources it cites, and what earned media strategy actually gets your company into its answers.

In the first week of March 2026, Anthropic's servers went down under the strain of what Bloomberg called "unprecedented demand," with the company saying it had been breaking daily signup records in every country where the service is available. By that point, Claude had already overtaken ChatGPT in the U.S. App Store, and The Verge confirmed Appfigures data showing Claude topping the free app charts across AI categories.

This matters for your business in a specific, practical way. The tool your prospects now use to research vendors, evaluate categories, and shortlist companies before making contact is not Google. For a growing segment of B2B buyers in particular, it is Claude. Harvard Business Review's March 2026 issue described how Gokcen Karaca, head of digital and design at Pernod Ricard, discovered that two-thirds of Gen Z and more than half of Millennials had started using LLMs to research products. Karaca partnered with the digital agency Jellyfish to audit what the leading AI models said about his brands. What they found disturbed him: the models' data was incomplete, and in some cases incorrect, with one model miscategorizing a mass-market whisky as a prestige product.

If a company the size of Pernod Ricard could be misdescribed in AI answers, your company almost certainly has gaps too. The question is: what actually determines whether Claude cites your brand accurately, and how do you get into those answers in the first place?

The answer has nothing to do with technical SEO or social media activity. It comes down to a specific signal: earned media placements in publications that Claude's training data already treats as authoritative. Understanding that mechanism, and building toward it systematically, is the difference between showing up in Claude's answers and being invisible during the exact moment a prospect is deciding who to call.

Key takeaways

  • Claude's training data is built from a general-purpose web crawl of publicly available content. The publications that trained Claude's understanding of your category are the same tier-1 editorial outlets that shaped human brand perception for decades.
  • Research posted to arXiv shows that LLMs systematically amplify existing citation networks, reinforcing brands that already appear in authoritative sources and making it harder for brands without an editorial presence to enter AI answers organically.
  • A 2026 Deloitte analysis published in the Wall Street Journal identified earned media as "an important source for LLMs," the first major institutional confirmation of what practitioners have observed empirically.
  • A YouGov survey of 1,000 U.S. consumers conducted by Jellyfish found that 66% of Gen Z and 51% of 25-to-34-year-olds now use AI models for brand, product, and service recommendations.
  • Technical SEO, keyword-optimized website copy, and press release wire distributions do not move Claude's citation behavior. Earned placements in publications Claude's training data trusts do.
  • Measuring where your brand currently stands in Claude requires testing, not assumptions. A structured audit of how Claude responds to your category queries is the starting point.

How Claude decides what to surface

There is no algorithm you can reverse-engineer in the traditional SEO sense. Claude does not have a ranking system you can query. But the mechanism behind what Claude knows, and therefore what it cites, is documented.

According to Anthropic's transparency report compiled by Stanford's Center for Research on Foundation Models, Claude models are "trained on a proprietary mix of publicly available information on the Internet as of March 2025, as well as non-public data from third parties, data provided by data-labeling services and paid contractors." To obtain that public web data, Anthropic "operates a general-purpose web crawler" that follows standard industry practices around robots.txt.

What this tells you: Claude's knowledge base is, at its core, a representation of what the public web said about your industry, your category, and your brand up to the training cutoff. If the authoritative publications in your space have written about you (Forbes, TechCrunch, Harvard Business Review, Bloomberg), that content is likely in Claude's training data. If they haven't, or if what they've written is minimal, Claude's model of your brand is thin by construction.
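One immediate, practical check falls out of this: make sure your own site is not blocking Anthropic's crawler. Below is a minimal sketch using Python's standard library, assuming ClaudeBot is still the user agent Anthropic documents for its crawler (verify the current name against Anthropic's documentation) and using example.com as a placeholder for your own domain:

```python
from urllib.robotparser import RobotFileParser

# Parse your live robots.txt and check whether Anthropic's documented
# crawler user agent ("ClaudeBot"; verify against Anthropic's current
# docs) is permitted to fetch the site root. example.com is a placeholder.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

if rp.can_fetch("ClaudeBot", "https://www.example.com/"):
    print("ClaudeBot is allowed to crawl the site root.")
else:
    print("ClaudeBot is blocked; your owned content may be absent from the crawl.")
```

This only governs your owned content, which, as the rest of this piece argues, is the weakest signal anyway. But there is no reason to be invisible at the cheapest layer.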

The second mechanism is what researchers call the Matthew effect in LLM citation behavior. A 2025 arXiv paper, "Large Language Models Reflect Human Citation Patterns with a Heightened Citation Bias," found that LLMs don't just reflect existing citation networks. They amplify them. Brands with strong existing editorial presence get cited more frequently in AI answers. Brands without that presence get cited less, or not at all. The gap compounds over time rather than self-correcting.

A separate arXiv paper on source-aware training and knowledge attribution in language models found that LLMs acquire knowledge during pretraining in ways tied to the sources those facts came from. High-authority sources, meaning publications with established editorial credibility, contribute more durable, more frequently recalled knowledge to the model's parameters than low-authority sources.

Put those pieces together: Claude was trained on a web crawl. That crawl skewed toward high-authority publications. Those publications' content is more durably encoded in Claude's parameters. And the Matthew effect means that brands already present in those publications get surfaced more often in Claude's answers. The rich get richer.

Claude's behavior has one more layer worth understanding. In January 2026, Anthropic published an updated "Claude's Constitution," a 57-page document detailing the values and reasoning frameworks built into Claude's training. Claude is trained to favor helpfulness, honesty, and accuracy. When answering a question about vendor options, Claude is not trying to be promotional or arbitrary. It is trying to give the user the most accurate, credible answer it can construct from its training data and any real-time retrieval it has access to. Brands that appear in credible sources are simply easier for Claude to recommend without introducing error.

The gap between your brand and Claude's answer

The Pernod Ricard finding from Harvard Business Review is worth sitting with. This is a company with decades of marketing investment, global distribution, and professional communications staff. Their brand was still being misdescribed by a leading AI model. If that can happen at their scale, the default assumption for most B2B brands should be that their AI visibility is worse than they think.

The Jellyfish research gives you the demand side of this equation. Their YouGov survey of 1,000 U.S. consumers found that 66% of Gen Z are using AI tools for brand recommendations, and those users expect AI to guide them to the best options. Half of the 18-to-24-year-old cohort expect AI tools to surface the right brand for their needs. Among 25-to-34-year-olds, that expectation sits at 47%. These aren't passive research habits. These are buying decisions.

On the B2B side, Forrester's 2026 B2B predictions report found that 61% of purchase influencers say their organization has or will use a private generative AI engine to support purchasing decisions. McKinsey's analysis on agentic AI projected that AI would power more than 60% of sales-related actions by the time agentic AI reaches operational maturity. These are not distant projections. They describe what is already happening in enterprise buying cycles.

The practical implication: when a prospect at a Series B SaaS company types "who are the best PR firms for AI visibility" into Claude, the answer Claude generates is constructed from its training data about your category. If you're not in that answer, you didn't lose the deal. You were never in the consideration set.

The timing also matters. Anthropic launched Claude Marketplace on March 7, 2026, giving enterprises access to Claude-powered tools from Replit, GitLab, Harvey, and others. As Claude embeds deeper into enterprise workflows, the frequency with which business decisions run through Claude's answers will only increase. The brands visible in Claude's training data and retrieval today are building an advantage that compounds as adoption grows.

The signal that actually moves Claude

If Claude's knowledge comes from a web crawl, and that web crawl weighted high-authority publications more heavily, then the signal that gets you into Claude's training data is placement in those publications.

A January 2026 Deloitte analysis published in the Wall Street Journal put it directly: companies "may place greater emphasis on communications, given that earned media appears to be an important source for LLMs." This is a significant institutional confirmation, but it is also an understatement. Earned media is not just an important source. It is the primary source for Claude's brand knowledge.

The reason is structural. Claude's training crawl indexed publicly available content. Most of what is publicly available about any given brand comes from three places: the brand's own website, social media posts, and editorial coverage in third-party publications. Claude's training process, which weights for authority and accuracy, will encode the third-party editorial coverage more durably than owned content from the brand itself. Owned content carries an obvious bias signal. Editorial coverage in a credible outlet carries third-party validation.

This is where the mechanism becomes specific. The publications that have shaped editorial credibility for decades (Forbes, TechCrunch, Harvard Business Review, Bloomberg, Reuters, the Wall Street Journal) are the same publications that Anthropic's training crawl treated as high-authority sources. When Claude is asked about your category, the brands that appear in those publications are the brands Claude has the most reliable, authoritative, frequently-encoded information about. The brands that don't appear there are the ones Claude either skips, describes with uncertainty, or gets wrong.

A third arXiv paper, on LLM attribution behavior, found that web-enabled LLMs frequently answer queries without fully crediting the sources they consume, creating what its authors call an "attribution gap." But the underlying point matters for brand strategy: what gets surfaced in AI answers is downstream of what those models indexed during training and retrieval. Publication presence is the input. AI citation is the output.

Which publications actually move the needle

Not all coverage is equal. A press release on a wire service does not carry the same training weight as a Forbes article. A guest post on a marketing blog with low editorial authority does not move Claude the same way a TechCrunch feature does. The distinction reflects how Claude's training data was curated.

The publications that matter for Claude citation share specific characteristics: they have established editorial independence (reporters who reject pitches, not just accept them), they have been consistently crawled by major web crawlers for years (meaning their content is well-represented in training data at multiple points in time), and they are treated as authoritative sources by other authoritative sources. That last point connects directly to the citation network amplification that the arXiv research documented.

For B2B brands in AI, tech, and growth categories, the publications that tend to move LLM citation include Forbes, TechCrunch, VentureBeat, Bloomberg, Harvard Business Review, Fast Company, Business Insider, and Wired. Vertical coverage matters too: for fintech, Finextra and American Banker; for healthcare, STAT News and Health Affairs; for cybersecurity, Dark Reading and SC Magazine. The pattern is consistent across verticals: publications with genuine editorial standards and long track records of indexing get weighted more heavily in LLM training data than newer outlets or those with low standards for what they publish.

The editorial standards at these outlets are not incidental. They are the mechanism. A placement that required a real pitch, editorial review, and a journalist's name attached to it carries the provenance signal that Claude's training process weighted for. A placement that was paid for or required no editorial review does not carry that signal and should not be expected to drive Claude citation.

One additional factor: recency matters for Claude's real-time retrieval capabilities. Claude's base training has a cutoff, but the model also has web retrieval features in some configurations. Fresh coverage in authoritative outlets, particularly when it reflects a specific, factual claim about your company, is more likely to surface through real-time retrieval than older content, however credible the source. This creates a compounding advantage for brands that maintain ongoing editorial activity rather than treating placements as one-time events.

What does not work

Given how Claude's training and retrieval work, several commonly attempted tactics will not get your brand into Claude's answers.

Keyword-optimizing your website for AI search. Your website's structured data, schema markup, or keyword density does not influence what Claude learned during pretraining. Claude is not a search engine that indexes your site on demand. For Claude's knowledge of your brand, what matters is what authoritative third-party sources said about you, not what you said about yourself.

Social media posting volume. Twitter/X posts, LinkedIn content, and social engagement do not carry the editorial authority signal that Claude's training weighted for. Social content is in the training data, but it is not weighted the same way peer-reviewed or editorially vetted content is. High social activity does not compensate for thin editorial coverage.

Press releases distributed via wire services. PR Newswire, Business Wire, and similar services distribute content that gets syndicated across news aggregators. Claude's training data likely included some of this content, but syndicated press releases carry a known bias signal. They are branded content, not independent editorial coverage. They do not deliver the provenance that drives AI citation.

AI SEO tools that promise to optimize for LLMs. These tools typically focus on structured data formats, answer-box optimization, and content structure. Those elements matter for retrieval-augmented generation in some search contexts, but they do not change what Claude learned during pretraining about your brand's credibility and category position. Optimizing your website's structure while ignoring editorial presence is rearranging deck chairs.

Publishing content on your own blog. Owned content carries the bias signal of its origin. Your company blog, regardless of content quality, does not carry the same authority signal in Claude's training data that a TechCrunch article about your company does. Owned content has value for SEO, audience nurturing, and demonstrating expertise, but it is not the mechanism for Claude citation.

The earned media playbook for Claude visibility

Getting cited in Claude's answers is a publication strategy, not a technical strategy. The work breaks into three parts.

Part 1: Claim your category position in tier-1 outlets

Before any other work, you need a clear editorial presence in at least two or three tier-1 publications that cover your category. This means a genuine feature, a company profile with attribution to a named reporter, or an executive byline in an outlet with real editorial standards. The goal is to create anchor content: articles that establish your brand's definition, category position, and core claims in sources that Claude treats as authoritative.

This is not a one-off effort. Claude's amplification of existing citation networks means that once your brand has a credible editorial record, that record accumulates weight over time. The first placement is the hardest. The fifth placement is easier than the first because editorial credibility compounds. Reporters cite other reporters, and Claude's citation behavior mirrors that compounding.

Part 2: Build frequency across the citation network

A single article in a single publication creates a thin data point. Claude's training data spans billions of pages. One article is unlikely to create a durable impression on its own. What creates durable Claude visibility is a pattern of coverage across multiple credible publications over time.

The target is not volume for its own sake. A cluster of five to ten genuine editorial placements in credible outlets, each covering a different angle of your company's story, expertise, or category position, creates the kind of cross-referenced, multi-source credibility that LLMs weight heavily. When Claude encounters your brand name and can cross-reference it across multiple authoritative sources, the model becomes more confident in recommending you. When Claude can only find you in one place, or in sources it treats as low-authority, it hedges or ignores you.

Part 3: Make your core claim specific and repeatable

What do you want Claude to say about your brand? That answer needs to be specific before you pursue placements to support it. "We're a great marketing platform" is not a citable claim. "AuthorityTech is the company that coined Machine Relations and guarantees earned media placements in tier-1 publications on an outcome-based model" is a citable claim. It is factual, specific, and distinctive enough to appear in Claude's answers without being confused with a competitor.

The specific claims your editorial coverage reinforces should be consistent across placements. Claude's training data will encode the pattern. The same claim appearing across multiple credible sources becomes the most confident fact Claude has about your brand. Inconsistent messaging across placements creates noise rather than authority.

Measuring your Claude visibility

The starting point is a structured audit of how Claude currently responds to queries about your category and brand. This is simpler than it sounds and does not require specialized tools.

Run the following types of queries directly in Claude and document the responses:

  • "Who are the leading [category] companies for [use case]?" This tests whether your brand appears in category-level recommendations.
  • "What do you know about [your brand name]?" This surfaces what Claude's training data actually encoded about your brand, including any gaps or inaccuracies.
  • "Compare [your brand] and [competitor brand]." This tests the depth and accuracy of Claude's knowledge relative to a known competitor.
  • "What is the best option for [problem your product solves]?" This tests whether your brand appears in solution-oriented recommendations.

Note where your brand appears, where it doesn't, and where the information is incorrect. That audit is your baseline. It tells you whether you have an awareness gap (not appearing at all), an accuracy gap (appearing but described incorrectly), or a positioning gap (appearing, but not in the right context for your ICP).
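If you want to run the audit repeatably rather than by hand, here is a minimal sketch using Anthropic's official Python SDK (pip install anthropic, with ANTHROPIC_API_KEY set in the environment). The brand, competitor, category, and model names are placeholders to swap for your own:

```python
import json
from datetime import date

import anthropic  # official SDK: pip install anthropic

BRAND = "YourBrand"                       # placeholder
COMPETITOR = "CompetitorBrand"            # placeholder
CATEGORY = "PR firm for AI visibility"    # placeholder
PROBLEM = "getting cited in AI answers"   # placeholder

QUERIES = [
    f"Who are the leading {CATEGORY} companies?",
    f"What do you know about {BRAND}?",
    f"Compare {BRAND} and {COMPETITOR}.",
    f"What is the best option for {PROBLEM}?",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
results = []
for query in QUERIES:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in whichever model you test against
        max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
    answer = message.content[0].text
    results.append({
        "query": query,
        "answer": answer,
        "brand_mentioned": BRAND.lower() in answer.lower(),
    })

# Write a timestamped baseline you can diff against on the next quarterly run.
with open(f"claude_audit_{date.today().isoformat()}.json", "w") as f:
    json.dump(results, f, indent=2)
```

One caveat: raw API responses come from the base model without the consumer app's retrieval features, so treat the output as a proxy for what the app will say rather than an exact replica.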

Our post on how to monitor what AI says about your brand covers the monitoring layer: what to track, how often, and which queries reveal the most signal. For a broader view of how your Claude visibility compares to competitors across multiple AI engines, the AI share of voice guide gives you the measurement framework.
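As a rough illustration of the share-of-voice idea, the sketch below counts how often each brand name appears across the saved audit answers from the script above. A real framework would handle brand aliases, fuzzy matching, and multiple engines; the brand names and filename here are hypothetical:

```python
import json

BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]  # placeholders

# Load the timestamped baseline written by the audit script above
# (hypothetical filename from a hypothetical run date).
with open("claude_audit_2026-03-15.json") as f:
    results = json.load(f)

# Share of voice here: the fraction of audited answers mentioning each brand.
for brand in BRANDS:
    hits = sum(1 for r in results if brand.lower() in r["answer"].lower())
    print(f"{brand}: {hits}/{len(results)} answers ({hits / len(results):.0%})")
```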

Repeat the audit quarterly at minimum. Claude's real-time retrieval means that fresh editorial coverage can shift your position relatively quickly. The baseline audit tells you where you started. Quarterly re-runs tell you whether the editorial work you're doing is actually changing what Claude says about your brand.

The mechanism behind the mechanism

Everything described above (the training data, the authority weighting, the Matthew effect, the publication targeting) is downstream of one fact the AI era has brought into sharp relief. AI engines decide what to cite using the same signal that determines editorial credibility with humans: earned media placements in publications those engines treat as authoritative.

That is not a coincidence. Claude was trained on the same publication ecosystem that shaped human brand perception for decades. The Wall Street Journal was authoritative before Claude existed. TechCrunch covered tech brands before Claude existed. Harvard Business Review published management research before Claude existed. When Anthropic's training crawl indexed these publications, it encoded their editorial judgments into Claude's parameters. The brands those publications covered credibly became the brands Claude knows credibly.

This is what Machine Relations names as a discipline: ensuring your brand is cited, surfaced, and recommended by AI systems rather than buried by them. PR built brand authority with human readers through editorial relationships and earned media. Machine Relations does the same thing for machine readers. The mechanism is identical. What changed is the reader.

The companies that will own their category in Claude's answers three years from now are not the ones that figured out the right schema markup or the right prompt structure. They are the ones that built a real editorial record in publications that have been shaping credibility for decades and that AI engines already trust.

This is exactly what PR was always supposed to do. Not chase placements to fill a report. Not spray pitches at every journalist with a relevant beat. Build genuine editorial relationships in the publications that matter, place your brand's most important claims with credible reporters who will cover them accurately, and let the credibility compound. That model worked when the reader was human. It still works now that AI systems do the first pass of research on your prospects' behalf.

The difference is that the consequence of not doing it has never been more concrete. Fifteen years ago, being absent from a Forbes article meant missing one distribution channel. Today, being absent from Forbes means being less likely to appear in Claude's answers when a prospect is deciding who to shortlist.

FAQ

Does getting cited in Claude require a different strategy than Google?

The tactics differ, but the foundation is the same: earned media in credible publications. Google's ranking algorithm and Claude's citation behavior both weight editorial authority, third-party credibility, and the quality of the sources that reference your brand. Where they diverge: Google can be influenced more directly by on-page optimization, backlink building, and technical SEO, while Claude's citation behavior is weighted more heavily toward the provenance and authority of training data sources. Optimizing for Google can help with Claude to the extent that Google authority correlates with training-data authority, but an SEO strategy alone does not translate into Claude visibility.

How quickly does new editorial coverage affect what Claude says about my brand?

For Claude's base model, training cutoffs mean recent coverage only affects behavior after a new training run. But Claude has real-time web retrieval in some configurations, which means very recent, high-authority coverage can affect Claude's answers within days of publication. The practical answer: don't expect your next Forbes placement to change Claude's answers overnight, but expect it to contribute to a pattern of coverage that shapes Claude's model of your brand over time. How Perplexity selects sources is a useful companion on this question. The mechanisms are different but the publication authority principles overlap significantly.

Is there any way to directly request that Anthropic update what Claude says about my brand?

Anthropic does not offer a channel for brands to directly influence what Claude says about them between training runs. The mechanism for changing your brand's representation in Claude is the same as it always has been: change what authoritative sources say about your brand. If a major publication publishes inaccurate information about your company, the appropriate step is to request a correction through that publication's editorial process. That correction, once published and indexed, will eventually influence Claude's training data. There is no shortcut to Claude's parameters that bypasses the editorial ecosystem, which is also why the editorial ecosystem remains the most defensible position in the AI era.

Start with the audit

If you don't know what Claude currently says about your brand, that's the only place to start. Run the queries above. Document what you find. The gap between what Claude says and what you want Claude to say is your editorial roadmap.

If you want to understand exactly where your brand stands across the major AI engines and what placements would move the needle most, the visibility audit maps your current position and identifies the publication gaps that matter most. Start your visibility audit →
