Morning Brief · GEO / AEO

VC Bet $21M That Better SEO Wins AI Visibility. The Data Says It's the Wrong Race.

daydream just raised $21M on the thesis that AI-native SEO optimization gets brands cited in AI search. Ahrefs analyzed 863,000 SERPs and found 62% of top-ranked pages don't appear in AI answers at all. The race isn't broken — it's the wrong track.

Jaxon Parrott

On April 2nd, daydream — an AI-native SEO agency — closed a $15M Series A led by WndrCo, with First Round Capital and Basis Set Ventures along for the ride. Total funding: $21 million. The thesis is straightforward: SEO is being restructured by AI, most agencies can't keep up, and daydream's combination of automated SEO agents and human strategists will get brands the AI visibility they're losing.

It's a clean pitch. The problem is the data sitting underneath it.

Ahrefs analyzed 863,000 keyword SERPs in March 2026 and found that only 38% of AI Overview citations come from pages ranking in the top 10. Eighteen months ago, that number was 76%. At position #1 — the best possible organic ranking — a page has a 25% chance of being cited in an AI answer. Three out of four number-one pages get left out entirely.

Walker Sands published an even wider dataset. They analyzed nearly 45 million keywords across 828 enterprise B2B companies and found the median enterprise brand shows up in 3% of relevant AI Overview responses. These are companies with real SEO budgets, rankings, and content programs. Three percent.

That gap between ranking and citation is not a technical problem. You cannot optimize your way out of it.

What the numbers actually show

The pattern across multiple independent datasets is consistent enough to treat as structural.

| Signal | What it predicts | What it doesn't predict |
| --- | --- | --- |
| Google top-10 ranking | Click traffic from blue links | AI citation rate |
| Organic keyword volume | Traditional search visibility | AI Overview presence |
| Backlink count (0.218 correlation) | Domain authority score | AI recommendation rate |
| Third-party editorial coverage (0.664 correlation) | AI citation probability | Direct click traffic |
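To make the correlation figures in the table concrete: they are Pearson coefficients relating a per-brand signal (backlinks, editorial placements) to that brand's AI citation rate. The sketch below shows how such a comparison is computed. The numbers in it are invented for illustration and are not the study data; only the method is real.

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-brand metrics (illustrative only):
backlinks = [120, 4500, 300, 900, 15000, 60]        # raw backlink counts
editorial = [2, 18, 5, 9, 22, 1]                    # third-party editorial placements
citations = [0.01, 0.12, 0.03, 0.07, 0.15, 0.00]    # share of AI answers citing the brand

print(f"backlinks vs citations: {pearson(backlinks, citations):.3f}")
print(f"editorial vs citations: {pearson(editorial, citations):.3f}")
```

A coefficient near 0.2 (backlinks) means the signal barely tracks citation outcomes; near 0.66 (editorial coverage) means it tracks them strongly. That gap is the entire argument of the table.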

AI engines select for editorial authority, not ranking position. Ahrefs' data shows 62% of AI Overview citations come from content that doesn't rank in the top 10. Otterly AI's analysis of over 1 million citations across ChatGPT, Perplexity, and Google AI Overviews found that editorially vetted sources — not brand-owned content optimized for rankings — dominate AI responses.

Optimizing faster and more automatically for ranking signals moves the ranking needle. It does not move the citation needle.

Why the signals separated

Google built its original ranking algorithm around backlinks because backlinks were a proxy for editorial trust: other humans decided this was worth linking to. AI engines are doing something similar, one layer up. When deciding what to cite in response to "best [category] software for enterprises," they are not asking who ranks highest. They are asking which brands the publications they already trust have vouched for.

That is a different question. It requires different work to answer.

The brands showing up consistently in AI answers share one pattern: they appear in publications that AI engines treat as high-authority sources. TechCrunch. Forbes. Harvard Business Review. Vertical trade media with genuine editorial standards. These placements are not ranking signals — they are trust signals. The AI engine encountered them during training and continues to weight them during retrieval.

As I wrote in my piece on why I coined Machine Relations, PR was the discipline of convincing humans to cover you. Machine Relations is the discipline of convincing the machines. No SEO agent, however fast, generates a Forbes placement. Those are earned through editorial relationships — the same relationships that determined brand credibility long before AI search existed.

And as Christian Lehman's shortlist audits have shown repeatedly — from Adyen to Fortinet to Close CRM — companies with strong product reviews and solid SEO footprints still go completely dark in AI-generated answers when they lack the publication-tier presence AI engines trust.

What the VC bet actually reveals

I'm not arguing daydream built something worthless. SEO execution still matters for click traffic, traditional SERP visibility, and content infrastructure that feeds other channels. A $21M raise suggests a real market pulling in that direction.

But the pitch conflates two separate problems. The companies losing ground in AI answers are not losing because their technical SEO is slow. They're losing because they don't have the third-party editorial presence that AI engines use to decide who gets cited.

Walker Sands found that 4.6% of enterprise B2B brands have zero AI Overview mentions — not despite ranking, but alongside ranking. They appear in traditional search. They disappear in AI answers. That is not a problem faster SEO execution solves.

This is the gap Machine Relations as a discipline was built to address. Earned media placements in trusted publications were always how brand credibility transferred to human readers. AI engines now run the first pass on that credibility transfer — using the same publications, the same editorial gatekeepers, the same third-party authority signals. The mechanism didn't change. The reader did.

AuthorityTech's research comparing earned vs. owned content citation rates shows that placements in high-authority publications generate AI citations at rates owned content simply cannot match, regardless of optimization level. The $21M just went into the ranking lane. The citation lane has a different entry requirement.


FAQ

Why do top-ranked pages miss AI citations? Ranking in Google's top 10 signals relevance to an algorithm built on link graphs and content signals. AI Overview citation uses a different signal: editorial trust from publications AI engines already treat as authoritative. A page can rank #1 and still be excluded from an AI answer without third-party corroboration. Ahrefs found 62% of AI citations now come from outside the top 10, up from 24% eighteen months earlier.

What's the difference between AI-native SEO and Machine Relations? AI-native SEO optimizes owned content and technical signals for better ranking, which may improve citation probability at the margin. Machine Relations operates on the upstream authority layer: earned placements in the publications AI engines already trust. SEO improves content structure. Machine Relations changes whether the brand appears in the trusted third-party sources AI engines pull from by default.

Should brands stop doing SEO and focus entirely on earned media? No — they serve different objectives. SEO drives click traffic from traditional search, still a large channel. Earned media in trusted publications drives AI citation and third-party authority. The error is assuming that improving SEO performance automatically improves AI visibility. The data from Ahrefs, Walker Sands, and Otterly AI all show these have separated into distinct performance tracks requiring different investment decisions.

Run a free AI visibility audit to see exactly where your brand appears — or doesn't — in AI-generated answers, and which publications are currently setting the citation floor in your category.