Google's Information Gain Update Is Live. Here's the 4-Question Audit to Run Before April.
Google's March 2026 core update now weights Information Gain as a ranking signal. Pages that restate what's already ranking are losing ground. Here's the exact four-question audit to identify which pages are at risk and what to fix before the rollout finishes.
Google's March 2026 broad core update started rolling out on March 6 and is still settling. If you missed the news, the headline number is this: over 55% of monitored websites have seen ranking changes in the first three weeks.
That's not unusual for a core update. What's unusual is what the update is measuring.
For the first time, Google is giving meaningful ranking weight to Information Gain — a concept the company has held patents on for years but never applied at this scale. The algorithm now evaluates whether your page adds something genuinely new compared to what already ranks for the same query. Pages that restate the top 10 results, even with clean formatting and solid word counts, are losing ground. Pages with original data, first-hand case studies, and proprietary analysis are gaining an average of 22% visibility, according to early tracking from multiple SEO research teams.
This rolled out alongside two other changes that compound the pressure:
The Gemini 4.0 Semantic Filter is now part of Google's quality scoring. It targets content produced at scale without meaningful editorial input — content that reads well but doesn't say anything the existing index doesn't already say. Content farms running automated pipelines without human expertise are reporting double-digit visibility drops.
Stricter E-E-A-T enforcement. A study cited by Search Engine Journal found that 72% of top-ranking pages now display detailed author credentials, up from 58% before the update. If your team is publishing without named authors who have verifiable expertise, you're now at a measurable disadvantage.
The three signals reinforce each other. A page written by an identifiable expert, containing original data, with genuine editorial judgment applied — that page checks all three boxes simultaneously. A page written by "Editorial Team" that restates what competitors already published, even if a human technically edited it, fails all three.
Why This Matters More Than a Typical Core Update
Most core updates shift rankings without changing the type of content that wins. This one changes the type.
Previous updates asked: Is this content well-structured? Is it relevant to the query? Does it load fast?
This update adds: Does this content tell the reader something they can't already find?
That's a different question. And it hits B2B content programs especially hard, because the standard B2B playbook — research what's ranking, cover the same subtopics more thoroughly, publish — is exactly the pattern Information Gain penalizes.
The Pew Research Center reported that click rates drop from 15% to 8% when AI summaries appear. Google's own AI Overviews now appear on roughly 20% of queries. In that environment, the pages that survive organic competition are the ones Google considers genuinely additive to its index — not just competent restatements of what it already knows.
The 4-Question Content Audit
Run this on every page that drives meaningful organic traffic. Start with the 20 pages that generate the most impressions in Google Search Console.
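If you want to build that starting list programmatically, a minimal sketch like the one below ranks pages by impressions from a Search Console "Pages" CSV export. The column names (`Page`, `Impressions`) match the standard export format, but verify them against your own file before running this against real data.

```python
import csv
import io

def top_pages(csv_text, n=20):
    """Rank pages by impressions from a Search Console Pages export.
    Impressions may be exported with thousands separators, so strip commas."""
    rows = csv.DictReader(io.StringIO(csv_text))
    pages = [(r["Page"], int(r["Impressions"].replace(",", ""))) for r in rows]
    return sorted(pages, key=lambda p: p[1], reverse=True)[:n]

# Illustrative sample in the shape of a GSC export
sample = """Page,Clicks,Impressions
https://example.com/guide,120,"4,500"
https://example.com/blog,80,"9,200"
https://example.com/pricing,300,"2,100"
"""

print(top_pages(sample, n=2))
# → [('https://example.com/blog', 9200), ('https://example.com/guide', 4500)]
```

Swap the inline sample for `open("pages.csv").read()` on a real export and the same function gives you the audit's starting list.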
Question 1: Does this page contain data, findings, or examples that don't appear in the current top 5 results?
Open the top 5 ranking pages for your target query. Read them. Then read your page.
If your page covers the same subtopics with the same types of evidence and the same conclusions, it has zero Information Gain. Google's systems can now detect that.
What to look for:
| High Information Gain | Zero Information Gain |
|---|---|
| Proprietary benchmark data from your own customers | Industry stats sourced from the same reports everyone else cites |
| A specific case study with named outcomes | "Many companies have found that..." |
| An original framework you developed from working on the problem | A repackaged version of a competitor's framework |
| Contradicting conventional wisdom with evidence | Restating conventional wisdom with more words |
The fix: If a page has no original contribution, you either add one or accept that the page will lose ground over the next 30 days. Adding original data is the single highest-leverage change. Research from Princeton and Georgia Tech found that adding statistics improves AI citation rates by 30-40%. That same principle now applies to organic ranking through Information Gain.
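A crude way to put a number on "same subtopics, same evidence" is lexical overlap. The sketch below uses Jaccard similarity over word sets, which is far simpler than whatever semantic comparison Google actually runs — treat it as a rough first-pass flag, not a measurement of Information Gain, and any threshold you pick is your own assumption.

```python
import re

def word_set(text):
    """Lowercased content words, ignoring very short tokens."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def jaccard(a, b):
    """Jaccard similarity between two word sets: |A & B| / |A | B|."""
    sa, sb = word_set(a), word_set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Illustrative snippets: a restatement scores high against the
# competitor text; an original finding scores near zero.
competitor = "Content marketing requires keyword research and consistent publishing."
restatement = "Consistent publishing and keyword research are required for content marketing."
original = "Our benchmark of 340 customer accounts found onboarding emails doubled activation."

print(jaccard(competitor, restatement))  # high overlap
print(jaccard(competitor, original))     # low overlap
```

Run this against full page text rather than single sentences, and compare your page against each of the top five results individually: consistently high scores across all five is the pattern this update punishes.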
Question 2: Is there a named human expert attached to this content?
Check your author bylines. If the page lists "Marketing Team" or "Staff Writer" or has no author at all, that's now a ranking liability.
72% of top-ranking pages display detailed author credentials. That number was 58% before this update. The delta is large enough that Google is clearly weighting authorship signals more heavily, especially in B2B categories where the buying decision involves evaluating credibility.
The fix: Attach a named author with a linked bio page that includes their relevant credentials, role, and external mentions. If the content covers technical territory, the author should have demonstrable experience in that domain. Generic bios don't clear the bar.
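One way to spot-check this at scale is to look for schema.org `Person` author markup in each page's JSON-LD, which is one common way author credentials are exposed to crawlers. This is a heuristic sketch, not Google's actual check: the regex, the generic-name list, and the assumption that authorship lives in JSON-LD are all simplifications.

```python
import json
import re

def has_named_author(html):
    """Heuristic: does any JSON-LD block declare an author of type
    Person with a non-generic name? Not Google's actual signal."""
    generic = {"editorial team", "staff writer", "marketing team", "admin"}
    for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        author = data.get("author", {})
        if isinstance(author, dict) and author.get("@type") == "Person":
            name = author.get("name", "").strip().lower()
            if name and name not in generic:
                return True
    return False

# Hypothetical page with a named-author Article schema
page = '''<html><head><script type="application/ld+json">
{"@type": "Article", "author": {"@type": "Person", "name": "Dana Ortiz"}}
</script></head></html>'''

print(has_named_author(page))  # True
```

Pages that fail this check aren't necessarily missing an author, but they're worth a manual look, since a byline that only exists as visible text without structured markup is easy for crawlers to miss.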
Question 3: Would an AI tool produce substantially the same content from the same brief?
This is the Gemini 4.0 Semantic Filter test. Google isn't penalizing AI-assisted content. It's penalizing content where the AI did the thinking and a human just edited the grammar.
Read your page and ask: if I gave ChatGPT the target keyword and a 200-word brief, would the output cover the same ground in the same way? If yes, the page is at risk.
What survives the filter:
- Content that includes first-person operational experience ("When we ran this for a client in financial services, the result was...")
- Content that takes a position and defends it with evidence rather than neutrally summarizing all perspectives
- Content that addresses failure modes and edge cases that only come from actually doing the work
What doesn't:
- Comprehensive overview posts that cover everything at surface level
- "Complete guide" formats that organize publicly available information without adding to it
- Content that reads fluently but says nothing a competent reader couldn't have guessed
Question 4: Does this page address a question the top results answer poorly or not at all?
Pull up your Search Console data. Filter for queries where you rank positions 8-20 with decent impressions but low CTR. These are the pages where you're close enough to matter but not differentiated enough to win.
For each one, check whether the top results have a clear gap — a question they don't answer, a perspective they don't cover, a data point they're missing. If your page fills that gap, it has Information Gain. If it doesn't, it's competing on the same ground with the same material, and this update will push it down.
The fix: Don't add more content to a page that already covers the same territory as competitors. Add content that covers what competitors missed. One original section with a specific, verifiable claim does more for Information Gain than 2,000 additional words of restatement.
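The Question 4 filter itself is simple to express in code. The sketch below applies it to query rows in the shape of a Search Console export; the thresholds (100+ impressions, CTR below 2%) are illustrative assumptions you should tune to your own traffic levels.

```python
# Hypothetical rows in the shape of a Search Console query export
rows = [
    {"query": "b2b content audit",    "position": 12.4, "impressions": 1800, "ctr": 0.011},
    {"query": "seo checklist",        "position": 3.1,  "impressions": 5200, "ctr": 0.064},
    {"query": "information gain seo", "position": 17.8, "impressions": 640,  "ctr": 0.008},
    {"query": "core update recovery", "position": 9.5,  "impressions": 45,   "ctr": 0.000},
]

# Positions 8-20, decent impressions, low CTR: close enough to matter,
# not differentiated enough to win. Thresholds are illustrative.
candidates = [
    r for r in rows
    if 8 <= r["position"] <= 20 and r["impressions"] >= 100 and r["ctr"] < 0.02
]

for r in candidates:
    print(r["query"])
# → b2b content audit
# → information gain seo
```

Each query this surfaces gets the manual gap check: open the top results for it and look for the question they don't answer.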
The Audit in a Table
| Question | What You're Testing | Red Flag | Fix |
|---|---|---|---|
| 1. Original data or findings? | Information Gain | Same evidence as competitors | Add proprietary data, original case studies, or contradictory findings |
| 2. Named expert author? | E-E-A-T | "Staff Writer" or no byline | Attach named author with credentials and linked bio |
| 3. Could AI produce the same thing? | Semantic Filter | Generic overview format | Add first-person experience, positions, and failure modes |
| 4. Does it answer what competitors miss? | Gap coverage | Same ground, same depth | Find the unanswered question and build around it |
What Not to Do During the Rollout
The update hasn't finished settling. Google's own guidance is to wait at least one full week after completion before drawing conclusions. The rollout is expected to wrap in early April.
Two mistakes to avoid:
Don't delete pages that dropped. Consolidation can work, but deleting pages mid-rollout removes data you need for diagnosis. Wait until the update completes, identify the pattern, then make structural changes.
Don't add word count for the sake of depth. Information Gain is not about length. A 1,200-word page with one original finding scores higher than a 4,000-word page that restates what's already in the index. Google's systems are measuring novelty, not volume.
The Larger Shift
This update is the organic search version of a trend that's already playing out in AI citation behavior. AI engines like ChatGPT and Perplexity pull from sources they consider authoritative and novel. Moz's 2026 analysis of 40,000 AI Mode queries found that 88% of AI Mode citations don't appear in the organic top 10. The content that gets cited — both by AI engines and now by Google's own ranking system — is content that adds something the existing index doesn't have.
The brands that build this into their content operations now are building for both channels simultaneously. Original data, named expertise, editorial judgment applied to every piece. That's what earned authority looks like in practice — not just a placement strategy, but a content quality standard that compounds across every discovery surface.
The Machine Relations framework calls this the convergence: the same signals that earn AI citations — third-party credibility, editorial depth, named expertise — are the signals Google is now weighting more aggressively in organic search. The infrastructure is the same. Earned placements in publications AI systems trust feed both Google's Information Gain scoring and AI engines' citation pools. Companies investing in citation architecture aren't optimizing for one channel — they're building the credibility layer both channels draw from.
Run the four-question audit on your top 20 pages this week. The update finishes in early April. What you fix before then determines whether you're on the right side of a 22% average visibility shift.
If you want to see which of your pages AI engines currently cite and where the gaps are, the visibility audit maps it by query type — organic and AI side by side.
Sources:
- Google Search Status Dashboard — March 2026 Core Update rollout confirmation
- Search Engine Journal SEO Pulse — March 2026 analysis, E-E-A-T data (72% author credential finding)
- ALM Corp Digital Marketing News — March 11-20 analysis, Information Gain and Gemini 4.0 reporting
- Moz 2026 AI Mode Analysis — 40,000 queries, 88% citation non-overlap finding
- Princeton/Georgia Tech GEO Research — Aggarwal et al., SIGKDD 2024, statistics and AI citation rates
- Pew Research Center — July 2025 study, click rate impact of AI summaries