Your AI Citations Expire in 4.5 Weeks. Here's the Platform-by-Platform Refresh Playbook.
New research on 3.5 million citation events reveals the exact half-life of AI citations by platform. ChatGPT churns fastest at 3.4 weeks. Perplexity holds longest at 5.8 weeks. The refresh cadence your team needs for each.
Your AI citations have a shelf life, and it is shorter than most teams assume. Scrunch and Stacker analyzed 3.5 million citation events across AI platforms from September 2025 to March 2026 and measured how long sources persist in AI-generated answers. The median half-life, the point where 50% of a cohort's citations have dropped off, is 4.5 weeks. That means half the citations your content earned last month are already gone.
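To make the decay math concrete: if decay is roughly exponential (an assumption on our part; the study reports half-lives, not full curve shapes), the share of a cohort still cited after t weeks is 0.5^(t / half-life). A minimal sketch:

```python
# Share of a citation cohort still cited after `weeks`, assuming
# exponential decay parameterized by the measured half-life.
def retained_share(weeks: float, half_life: float) -> float:
    return 0.5 ** (weeks / half_life)

# At the 4.5-week median, roughly 46% of the citations earned four
# weeks ago have already dropped off:
print(f"{1 - retained_share(4, 4.5):.0%}")  # 46%
```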
The study tracked cohorts of cited sources week by week and applied survival curve analysis with 200 bootstrap resamples to estimate confidence intervals. The data covers 120,000+ domains, eight industries, and six AI platforms. What emerged is that each platform has a distinct refresh cadence, and teams optimizing with a single strategy across all of them are wasting effort where it counts most.
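The study's exact estimator is not published, so the sketch below is a reconstruction of the cohort approach under our own assumptions: compute a weekly survival curve, read off the week it crosses 50%, and bootstrap the cohort to get an interval. Function names, week granularity, and the percentile interval are our choices.

```python
import random

def survival_curve(lifetimes, max_weeks):
    """Fraction of a cohort's citations still live at each week.
    `lifetimes` is how many weeks each citation persisted."""
    n = len(lifetimes)
    return [sum(1 for life in lifetimes if life >= w) / n
            for w in range(max_weeks + 1)]

def half_life(curve):
    """First week where the surviving fraction falls below 50%."""
    return next((w for w, frac in enumerate(curve) if frac < 0.5),
                len(curve) - 1)

def bootstrap_ci(lifetimes, resamples=200, max_weeks=26):
    """Half-life interval from bootstrap resampling, mirroring the
    study's 200 resamples; 95% percentile interval is our choice."""
    estimates = sorted(
        half_life(survival_curve(random.choices(lifetimes, k=len(lifetimes)),
                                 max_weeks))
        for _ in range(resamples))
    return estimates[int(0.025 * resamples)], estimates[int(0.975 * resamples)]
```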
The platform gap changes how you plan
Christian Lehman's take: the headline number is useful, but the platform-level variation is where the execution decision lives. The difference between the fastest and slowest platform is nearly 70%.
| Platform | Citation half-life | Operational implication |
|---|---|---|
| ChatGPT | 3.4 weeks | Content needs refreshing every 2-3 weeks to maintain visibility |
| Google AI Mode | 4.3 weeks | Monthly refresh cycles keep you in rotation |
| Google AI Overviews | 4.7 weeks | Monthly to 5-week refresh cadence |
| Google Gemini | 4.8 weeks | Similar to AI Overviews, slightly longer runway |
| Perplexity | 5.8 weeks | 6-week refresh window, citations compound longer here |
Source: Scrunch/Stacker, 3.5M citation events, March 2026
ChatGPT cycles through sources faster than any other platform at 3.4 weeks. If your AI visibility strategy prioritizes ChatGPT, which makes sense given its 400 million weekly active users, your refresh cadence needs to be roughly biweekly to stay in the answer rotation. A monthly publishing schedule is too slow for this surface.
Perplexity, by contrast, holds citations nearly 70% longer at 5.8 weeks. The compounding effect matters: citations that persist on Perplexity accumulate more buyer impressions over a longer window, which makes it the highest-ROI platform for content that requires significant production investment.
Google's three AI surfaces cluster in the mid-range, between 4.3 and 4.8 weeks. Christian Lehman breaks this down further: the consistency across AI Mode, Gemini, and AI Overviews suggests a shared underlying refresh cycle within Google's ecosystem. If you are visible on one Google surface, you are likely visible on all three. If you fall off one, expect the others to follow within a week.
Earned distribution doubles the citation window
The most actionable finding is the gap between editorial and non-editorial sources.
Content distributed through the Stacker editorial network, which syndicates to hundreds of trusted news publishers, showed a citation half-life of nearly 10 weeks, compared to 4.5 weeks for non-network domains. That is a 2.1x durability advantage, and it held across all six platforms and all eight industries tracked.
The strongest combination in the data set was Perplexity plus editorial distribution: 12.3 weeks. That is more than 3x the half-life of a non-network ChatGPT citation. A piece of content distributed through earned editorial channels and cited on Perplexity stays in AI answers for roughly three months.
The mechanism is structural. When content exists on a single domain, the AI model has one source to evaluate. If that source drops below a relevance threshold during a refresh cycle, the citation disappears. When the same content exists across dozens of trusted editorial domains, the model has multiple retrieval paths. Even as individual sources rotate out of the answer set, the underlying information persists because it appears in enough credible places to stay above the citation threshold.
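One way to see the redundancy effect is a toy probability model (ours, not the study's): if each of n placements independently survives a refresh cycle with probability p, the chance that at least one retrieval path remains is 1 - (1 - p)^n, which saturates quickly as placements multiply.

```python
# Illustrative only: probability that at least one of n independent
# placements survives a refresh cycle, given per-placement survival p.
def any_path_survives(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 20):
    print(n, round(any_path_survives(0.5, n), 3))
# 1 -> 0.5, 5 -> 0.969, 20 -> 1.0
```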
This pattern aligns with what Muck Rack's Generative Pulse found across a different data set: 82% of all links cited by AI engines come from earned media sources. And the Fullintel-UConn academic study presented at IPRRC in February 2026 confirmed that 89% of AI-cited links were earned, unpaid media. The Scrunch/Stacker data adds the time dimension that was missing: earned media does not just get cited more often, it gets cited for longer.
The industry data narrows the decision
Citation decay varies by industry, though the range is tighter than the platform gap.
| Industry | Non-network half-life | With editorial distribution | Weeks gained |
|---|---|---|---|
| Real estate | 4.2 weeks | 10.6 weeks | +6.4 |
| Financial services | 4.6 weeks | 10.4 weeks | +5.8 |
| Retail and ecommerce | 4.1 weeks | 9.1 weeks | +5.0 |
| Healthcare | 4.0 weeks | 8.6 weeks | +4.6 |
| Marketing and advertising | 4.5 weeks | 8.9 weeks | +4.4 |
| Insurance | 4.8 weeks | 8.8 weeks | +4.0 |
| Tech and SaaS | 4.4 weeks | 7.0 weeks | +2.6 |
Source: Scrunch/Stacker, March 2026
Insurance and financial services citations are stickier, likely because the content in those verticals tends to be authoritative and slower to change. Healthcare and retail turn over faster, driven by recency sensitivity and the pace of product updates.
The finding that matters for operator planning: platform choice has more leverage on citation durability than vertical does. A healthcare brand on Perplexity (5.8-week baseline) outlasts an insurance brand on ChatGPT (3.4-week baseline). Your platform priority should come before your industry assumptions when building a refresh calendar.
The three-step refresh system that matches the data
Based on the platform half-lives, here is the minimum cadence to maintain continuous AI citation visibility.
Step 1: Map your platform exposure. Identify which AI platforms your target buyers actually use. Forrester's 2026 State of Business Buying found that 94% of B2B buyers use AI during purchasing, but usage patterns differ by buyer role and vertical. If your ICP skews toward enterprise procurement, ChatGPT and Perplexity are where the research starts. If your buyers are more Google-native, AI Mode and AI Overviews are the priority surfaces. Run five to ten key queries across each platform to baseline where you currently appear.
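If it helps to operationalize the baseline, here is a hypothetical sketch of the tracking structure. The platform list comes from the table above; the queries, field names, and coverage metric are placeholders for your own.

```python
from datetime import date

PLATFORMS = ["ChatGPT", "Google AI Mode", "Google AI Overviews",
             "Google Gemini", "Perplexity"]
QUERIES = ["example buyer query 1", "example buyer query 2"]  # your 5-10 queries

# One row per (platform, query) check; `cited` is filled in manually
# or by whatever monitoring tool you use.
baseline = [{"checked": date.today(), "platform": p, "query": q, "cited": None}
            for p in PLATFORMS for q in QUERIES]

def coverage(rows):
    """Share of checked platform/query pairs where your domain was cited."""
    scored = [r for r in rows if r["cited"] is not None]
    return sum(r["cited"] for r in scored) / len(scored) if scored else 0.0
```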
Step 2: Set platform-specific refresh cadences. ChatGPT content needs biweekly updates. Google surfaces need monthly updates. Perplexity content can run on a six-week cycle. A refresh does not mean a full rewrite. Substantive updates (a new stat from a current source, an updated example, a corrected claim, a visible timestamp change) are enough to reset the freshness clock. The GEO-16 framework found that pages meeting a quality threshold of 0.70 or higher with 12 pillar hits achieved a 78% cross-engine citation rate, and metadata freshness was the single strongest individual signal (Kumar et al., 2025).
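A minimal scheduler sketch for those cadences. The week counts map directly from the half-life table above; the page structure and function names are hypothetical.

```python
from datetime import date, timedelta

# Refresh cadence in weeks, per the platform half-life table above
REFRESH_WEEKS = {
    "ChatGPT": 2,              # biweekly
    "Google AI Mode": 4,       # monthly
    "Google AI Overviews": 4,  # monthly
    "Google Gemini": 4,        # monthly
    "Perplexity": 6,           # six-week cycle
}

def next_refresh(last_updated: date, platform: str) -> date:
    """Date by which a page should be refreshed to stay in rotation."""
    return last_updated + timedelta(weeks=REFRESH_WEEKS[platform])

def overdue(pages: list[dict], today: date) -> list[dict]:
    """Pages whose platform-specific refresh date has passed."""
    return [p for p in pages
            if next_refresh(p["last_updated"], p["platform"]) < today]
```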
Step 3: Invest in earned distribution for compounding durability. The 2.1x durability multiplier from editorial distribution is the single highest-leverage finding in this study. Every piece of content that earns placement in publications AI engines trust compounds in two ways: it gets cited more broadly (the 239% median lift in brand citations from earned distribution that AuthorityTech's research documented), and it holds those citations for roughly twice as long. The breadth times durability equation is the compounding mechanism that separates brands with stable AI visibility from brands caught in a publish-and-pray cycle. As Jaxon Parrott has written, the earned media mechanism that built brand authority for decades is the same mechanism that now builds citation architecture for AI engines.
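The breadth-times-durability claim can be written down. Under the same exponential-decay assumption as earlier, a cohort's cumulative citation-weeks integrate to initial citations × half-life / ln 2, so the two multipliers compound. Treating the 239% lift as 3.39x breadth and taking the 2.1x half-life multiplier at face value (a simplification of ours):

```python
import math

def citation_weeks(initial_citations: float, half_life: float) -> float:
    """Area under an exponential decay curve: cumulative citation-weeks."""
    return initial_citations * half_life / math.log(2)

base = citation_weeks(100, 4.5)                 # publish-and-pray baseline
earned = citation_weeks(100 * 3.39, 4.5 * 2.1)  # breadth lift x durability lift
print(round(earned / base, 1))                  # ~7.1x cumulative exposure
```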
Why citation cadence is the new editorial calendar
Most content teams still plan around a publish-and-forget model. Write the post. Distribute it once. Move on to the next brief. That model worked when search rankings were relatively stable and a page could hold position for months.
AI citations do not work that way. The Scrunch/Stacker data proves that AI visibility is a rotation, not a ranking. You earn your way into the answer set, and then the clock starts ticking. Half the value decays in a month. On ChatGPT, it decays in less than four weeks.
The operational shift Christian Lehman recommends: treat your citation calendar like a content maintenance system, not a production pipeline. For every new piece you publish, assign a refresh date by platform. Track which pages are in active AI rotation and which have fallen off. The teams that build this discipline now will hold AI visibility through the refresh cycles that erode their competitors' presence.
This is the infrastructure layer that Machine Relations describes as the operating discipline for the AI era. The mechanism underneath is the same one that has always made earned media the highest-leverage investment in brand authority: placements in publications that AI engines already index and trust create retrieval paths that survive model refreshes. The publications have not changed. The reader has. And now the data shows how quickly that new reader forgets.
If you want to see where your brand's AI citations currently stand, which platforms are holding, which are churning, and where the gaps are, AuthorityTech's visibility audit maps the full picture in about 15 minutes.
Frequently asked questions
What is AI citation half-life? Citation half-life is the number of weeks it takes for 50% of the sources cited in AI-generated answers to drop off. Scrunch and Stacker measured this across 3.5 million citation events from September 2025 to March 2026 and found a median of 4.5 weeks. The measure uses cohort-based survival analysis, tracking how many sources from a given week are still being cited in subsequent weeks.
Which AI platform has the fastest citation turnover? ChatGPT has the shortest half-life at 3.4 weeks, according to the Scrunch/Stacker study. This means content cited by ChatGPT falls out of rotation faster than on any other platform. Brands prioritizing ChatGPT visibility need the most aggressive refresh cadence.
Does earned media distribution extend citation durability? Yes. Content distributed through editorial networks lasted 2.1x longer than non-network content in the Scrunch/Stacker study, with a half-life of nearly 10 weeks versus 4.5 weeks. The strongest combination was Perplexity plus editorial distribution at 12.3 weeks. Muck Rack's separate analysis of over one million AI prompts found 82% of AI citations come from earned media sources.
How often should I update content for AI citation? The data suggests platform-specific cadences. ChatGPT content should be refreshed biweekly. Google AI surfaces (AI Mode, Gemini, AI Overviews) need monthly updates. Perplexity content can run on a six-week cycle. Updates do not require full rewrites. Refreshing statistics, updating examples, and adding visible timestamps are sufficient to signal freshness.
Does traditional SEO ranking predict AI citation durability? No. Moz's 2026 analysis of 40,000 queries found that 88% of Google AI Mode citations are not in the organic top 10. AI citation and traditional search ranking are increasingly independent systems, and citation durability depends on freshness, structural extractability, and earned authority rather than backlink profiles.