
The 20-Minute Audit That Shows Which Publications Drive AI Citations in Your Category

Most B2B marketing teams can't tell which publications are generating AI citations for their brand. They're spending PR budget finding out the hard way. Here's the audit that changes that in 20 minutes.

Christian Lehman

According to a December 2025 survey of 400 US and UK B2B tech marketing leaders by 3Thinkrs, reported by eMarketer, 42% of CMOs say declining traditional search performance is already forcing them to adapt. Not planning to. Already.

The budget followed. US enterprises directed 12% of their digital marketing spend toward generative engine optimization in 2025. A Conductor survey of 250+ digital marketing leaders found 94% planned to increase that share in 2026.

Where the money is going matters as much as how much is moving.

Most GEO investment is flowing into on-site content: better structure, cleaner formatting, schema markup, and content written to answer specific questions. These are legitimate improvements. They also optimize for the roughly 5% of AI citation activity that comes from a brand's owned content.

The other 95% comes from third-party sources: news coverage, editorial placements, analyst reports, reviews. Muck Rack's research into what AI systems actually read found that journalism and editorial coverage account for the dominant share of what AI engines pull from when generating answers. The publications where that coverage lives determine whether you get cited. And most marketing teams have no systematic way to know which publications those are for their specific category.

That's the gap this piece closes.


Why publication selection matters more than placement volume

A common error in PR briefs: measuring success by placement count. Five placements this month, twelve last quarter. The PR firm reports the numbers, the marketing team marks it done, nobody asks which publications show up when a prospect asks ChatGPT who leads the category.

Placement volume and AI citation value are not correlated. A single Forbes or TechCrunch placement in your category can generate AI citations for 12 months or longer. A spread of 20 trade publications might generate zero, because most trade publications don't get cited for most buyer queries.

Microsoft's updated AI Marketer's Guide from February 2026 describes how AI surfaces brands organically through three stages: trained baseline knowledge, retrieved web content, and structured first-party signals. The retrieved web content layer draws from a small set of publications with established authority in each domain. Being in those publications is what earns consistent AI citation. Being in the others matters less than most teams assume.

The audit below tells you, specifically, which publications are doing the work for your category right now.


The 4-step publication audit

This takes 20 minutes the first time. Once you run it, it changes how you brief your next PR push.

Step 1: Build your category's question list

Write down the 5 questions a prospect in your category is most likely to ask an AI tool before contacting a vendor. These should sound like real research queries, not branded searches. For a B2B SaaS company in project management:

  • "What are the best project management tools for remote engineering teams?"
  • "Which project management platforms integrate with Jira?"
  • "How do 50–200 person companies handle sprint planning?"

Make them specific. Vague questions return generic citation lists. You want the questions a buyer who is 60% of the way through a purchase decision would actually type.

Step 2: Run the prompts and record the sources

Run each question in both ChatGPT and Perplexity. Note which publications appear in the citations or sources panel. You're not looking at which brands get mentioned; you're building a list of which publications the AI pulled from to generate each answer.

Some publications will appear across multiple answers. Those are your tier-1 citation sources for this category, the ones AI engines have determined are authoritative for these questions. You should leave this step with a list of 10–15 source URLs.
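If you record the source URLs in a structured way, the tallying can be scripted. A minimal Python sketch, where the query names and URLs are hypothetical placeholders for your own step-2 notes:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citation URLs copied from the sources panels in step 2,
# grouped by query. Replace with your own recorded URLs.
citations_by_query = {
    "best PM tools for remote engineering teams": [
        "https://www.techcrunch.com/2025/05/pm-tools-roundup",
        "https://www.forbes.com/advisor/business/software/project-management",
    ],
    "PM platforms that integrate with Jira": [
        "https://www.forbes.com/advisor/business/software/jira-alternatives",
        "https://www.g2.com/categories/project-management",
    ],
}

def publication_frequency(citations):
    """Count how many distinct queries each publication domain appears in."""
    counts = Counter()
    for urls in citations.values():
        # De-duplicate within a query so a domain counts once per question.
        domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
        counts.update(domains)
    return counts

for domain, n in publication_frequency(citations_by_query).most_common():
    print(f"{domain}: cited in {n} of {len(citations_by_query)} queries")
```

Domains that appear across most of your queries are your tier-1 citation sources; the `most_common()` ordering is the ranking you'll reuse in step 4.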

Step 3: Cross-reference against your existing coverage

Compare the publication list from step 2 against where you've earned coverage in the last 12 months. The overlap tells you which of your past placements are contributing to AI citation. The gaps (publications appearing in AI answers where you haven't been featured) are your new target list.

That gap list is more valuable than most PR briefs currently recognize.
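The cross-reference in step 3 reduces to two set operations. A minimal sketch, with hypothetical domain lists standing in for your audit output and coverage history:

```python
# Hypothetical domains; substitute your step-2 results and your
# actual earned-coverage list from the last 12 months.
ai_cited_publications = {"forbes.com", "techcrunch.com", "g2.com", "cio.com"}
earned_coverage = {"techcrunch.com", "prnewswire.com"}

# Overlap: past placements already contributing to AI citation.
working_placements = ai_cited_publications & earned_coverage

# Gap: publications AI engines trust where you have no coverage yet.
target_list = ai_cited_publications - earned_coverage

print("Already working:", sorted(working_placements))
print("New targets:", sorted(target_list))
```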

Step 4: Brief accordingly

Most PR briefs include a target audience and a story angle. Add a third column: target publications, ranked by how frequently they appeared in step 2. The publications that showed up in 4 or 5 of your category queries belong at the top of the list. Tell whoever owns your PR brief why: these specific outlets are driving AI citations when your buyer researches the category.

If your current PR partner can't reliably place you in those publications, the audit just told you something important about the gap between placement volume and placement value.


The mistake that makes the audit irrelevant

Running the audit and then continuing to brief toward volume produces the same results as before. The audit is only useful if it changes the conversation.

That conversation is usually uncomfortable because it reframes what success means. "We got eight placements last quarter" sounds like results until the question becomes: which of those eight show up when your buyer researches the category in ChatGPT?

This is part of why understanding how Perplexity selects sources matters beyond SEO. Perplexity and ChatGPT each maintain their own version of a trusted-sources list per domain, based on signals that go beyond page authority: recency, editorial credibility, the presence of named sources, and how often other cited publications reference the same outlet. Tier-1 coverage compounds in AI citation in ways that trade publication volume does not.

The Muck Rack State of Journalism report found that journalists now receive 40 to 50 pitches per day on average. At tier-1 publications, the inbox situation is considerably worse. The mass-pitch PR approach that generates high placement volume is getting harder to execute at exactly the moment when targeted, relationship-driven tier-1 coverage would generate the highest AI citation return. The teams winning in AI citation are concentrating placements, not distributing them.


This is an infrastructure decision

The audit looks tactical. Run a few prompts, check the URLs, update a brief. The execution is fast.

What it's actually deciding is whether your brand gets cited when your prospects do AI-assisted research, which a growing share of enterprise buyers now do before they contact a vendor. The eMarketer/3Thinkrs data showing 42% of CMOs already feeling traditional search's decline is measuring a behavioral shift that's already inside your buyer's research process, not one that's coming.

That's what Machine Relations is about at the infrastructure level: earned media in publications that AI engines trust, generating citations that appear when buyers are deciding who to consider. The mechanism isn't ad spend or on-page optimization; it's the same third-party editorial credibility that made earned media effective with human readers, now applied to machine readers doing the same research.

The publications that generate those citations aren't random. They're identifiable with the audit above. The question is whether your next PR brief is targeted at them, or at volume.

Run the audit first. Then write the brief.

Get a full AI citation audit for your brand at app.authoritytech.io/visibility-audit