
Your PR Team Got You Placements. ChatGPT Still Doesn't Recommend You. Here's the Brief They Need.

Great placements that don't show up in ChatGPT answers have a specific cause: the brief was wrong. Here's the exact three-change fix that operators can give their PR teams this week.

Christian Lehman

You've been in Forbes. You've been in TechCrunch. Your PR team sent the coverage report and the numbers looked good. Then you asked ChatGPT who leads your category. Your name wasn't in the answer.

This isn't a PR problem. It's a briefing problem.

The placements that feed AI citations are not the same as the placements that impress a board deck. A placement optimized for impressions and domain authority can do almost nothing for your AI visibility. And right now, most PR teams are still briefing for impressions.

A September 2025 empirical study published on arXiv analyzed 1,702 citations across 70 prompts on Brave, Google AI Overviews, and Perplexity. The finding was direct: AI search engines systematically favor earned media from third-party, authoritative domains over brand-owned content, and they nearly exclude social media entirely. The citation source isn't your blog. It isn't your LinkedIn. It's the coverage other publications generate about you: in print, with named sources, with facts that trace back to something verifiable. The data on earned media dominating AI search has been consistent across studies; what's still missing is the brief that actually operationalizes it.

Most operators now know that part. What they're getting wrong is the brief behind it.

What your current briefing optimizes for

Ask your PR agency what they're measuring and you'll hear: domain authority, potential reach, estimated impressions, backlink value. Those metrics reflect what placements used to be for — human readers, awareness, SEO value.

None of them tell you whether a placement will be cited by ChatGPT or Perplexity when a prospect asks about your category.

The gap is structural. Your agency is doing exactly what you hired them to do. You hired them for the wrong deliverable.

MIT Sloan Management Review (January 2026) found that market-leading brands with the largest search investments are going invisible in AI search, while smaller competitors appear in their place. One financial services executive sat in a meeting and watched a prospect pull up ChatGPT to research the category. The executive's firm — the largest market share holder — wasn't in the answer. A smaller competitor was. The budget wasn't the difference. The brief was.

Three changes to the brief that shift your citation rate

These aren't agency overhauls. They're three specific additions to how you scope and verify every placement.

1. Define the citation trigger phrase before anything else

Most briefs say something like: "We want coverage that positions us as leaders in [space]." That's a headline brief. It optimizes for what a journalist titles their piece.

What you need is coverage that includes a specific phrase or category association an AI engine can retrieve. The difference between "AI-powered revenue intelligence platform" and "a revenue tech platform" matters. The first phrase is precise enough to get cited when someone asks ChatGPT about AI revenue intelligence tools. The second disappears into generic coverage.

Before every placement, define two things: the exact category phrase you want associated with your brand name, and the specific question you want the placement to answer when an AI engine retrieves it. Share both with the writer and the editor. Good journalists will work with this — they respond to specificity.
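If you want this discipline to survive handoffs between you, your agency, and the writer, it helps to encode the brief as a structured record rather than a paragraph buried in an email thread. A minimal sketch in Python; every field name and value here is illustrative, not a standard brief format:

```python
# A placement brief as a structured record. Field names are
# illustrative assumptions, not an industry-standard schema.
from dataclasses import dataclass, field

@dataclass
class PlacementBrief:
    brand: str
    category_phrase: str            # the exact phrase to associate with the brand
    target_question: str            # the buyer question the placement should answer
    primary_sources: list[str] = field(default_factory=list)  # see change #2

brief = PlacementBrief(
    brand="ExampleCo",  # hypothetical brand
    category_phrase="AI-powered revenue intelligence platform",
    target_question="What are the leading AI revenue intelligence tools?",
)
```

Everything in that record gets shared with the writer and the editor before the placement runs, so the citation trigger phrase is decided by you, not improvised by the journalist.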

2. Require primary source material in every brief

The arXiv research shows that AI engines heavily weight earned media from authoritative domains. It also shows that citation quality matters, not just publication tier.

Placements built around original data or named research get retrieved more consistently than opinion coverage. When you brief a placement, require at least one primary source reference that involves your brand — your own data, a named customer outcome with a measurable result, or a study you commissioned. This gives the AI engine an anchor: a factual claim with a traceable source.

Forrester's AEO guidance (November 2025) makes this explicit. The category of earned media that actually moves AI brand visibility is what they call "expert communications" — coverage of something your brand did or said, backed by evidence, not features that mention you in passing. A placement where a reporter quotes your VP of Marketing saying "we're seeing AI momentum" rarely gets retrieved. A placement where your VP cites internal usage data and a named customer outcome with a measurable result gets pulled by the engine because there's something to pull.

Forrester's zero-click search guidance (July 2025) frames it simply: brands that want to appear in AI-powered search need investment in expert communications, public relations, and customer advocacy — not just reach. That framing matters because it redefines what "a good placement" means operationally.

3. Verify citation 30 days after publication, not on publish day

Publish day is the wrong moment to measure. Whether you appear in AI answers 30 days later is what matters.

Run 5–10 queries your buyers are likely to ask. Vary them across ChatGPT, Perplexity, and Google AI Overviews — they pull from different source sets. For each result, record: does your brand appear? Is the cited source the new placement, or something older? What problem context does the AI place your brand in?

Track this per placement over time. You'll start to see which publications and which angle types actually drive citation — and which ones only show up in your coverage report.
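The check itself is easy to script so it runs the same way every 30 days. A minimal sketch in Python, assuming the official OpenAI SDK stands in for one engine; the brand name, model, and queries are all illustrative, and a plain substring check is a crude proxy. Engines whose APIs return sources would need their own parsing to answer the cited-source question.

```python
# Minimal 30-day citation check: run buyer queries against one engine
# and record whether the brand appears in the answer text.
# Engine, model name, brand, and queries are illustrative assumptions.
import csv
from datetime import date
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

BRAND = "ExampleCo"  # hypothetical brand
QUERIES = [
    "What are the leading AI revenue intelligence tools?",
    "Which revenue intelligence platforms use AI?",
    # ...5-10 queries your buyers actually ask
]

client = OpenAI()

def ask(query: str) -> str:
    """Return the engine's answer text for one buyer query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model slots in here
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content or ""

with open("citation_check.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for q in QUERIES:
        answer = ask(q)
        # Substring match records brand appearance only; cited source and
        # problem context still get logged by hand per the checklist above.
        writer.writerow([date.today().isoformat(), q, BRAND.lower() in answer.lower()])
```

Run it on the same query set each window and the per-placement pattern shows up in the CSV on its own.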

The WSJ's GEO primer (January 2026) named this metric "citation velocity" — the rate at which new earned media translates into AI mentions. It's the indicator that separates PR programs working for AI visibility from PR programs working for brand awareness. Right now most teams have no way to tell which one they're running.
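The primer names the metric without publishing a formula, so any computation is your own operationalization. One simple version, sketched below: new AI mentions gained in a 30-day window divided by placements published in that window.

```python
# One way to operationalize "citation velocity" -- NOT the WSJ's formal
# definition, which the primer does not publish. Numbers are illustrative.
def citation_velocity(new_mentions_30d: int, placements_30d: int) -> float:
    """AI mentions gained per placement over a 30-day window."""
    if placements_30d == 0:
        return 0.0
    return new_mentions_30d / placements_30d

# Example: 4 placements published this window, brand newly appears in 6
# of the tracked buyer queries -> 1.5 mentions per placement.
print(citation_velocity(6, 4))  # 1.5
```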

What doesn't close the gap

More volume. A hundred placements briefed the old way will not fix your AI citation rate. What changes it is the specificity, measurability, and primary source quality of each placement — not the count.

The operators who have figured this out have not necessarily spent more on PR. They've changed what they ask for. They run the post-publication citation check and they use what they find to brief the next placement more precisely. That feedback loop is the actual program.

This is what Machine Relations defines as the infrastructure layer: earned media placements in publications AI engines already trust, built to get cited, not just read. What PR got exactly right was earned media. What most PR programs still get wrong is the brief behind it, because they were never designed for machine readers. The AEO playbook gives you the full framework if you want to build this from scratch; the three brief changes above are where to start this week.

The publications AI engines cite are the same ones that shaped human brand perception for decades. What changed is that those publications now determine whether AI recommends you when your buyer is researching. Your competitor who appears in that ChatGPT answer didn't get there through a bigger ad budget or better SEO. They got there through a better brief.

Run the visibility audit to see where your brand currently shows up in AI answers for your category, and which placements are actually driving citation.
