Web Search, Built on Links, Starts to Shift Toward LLM Platforms

IBL News | New York

Web search, built on links, started to shift away from traditional browsers toward LLM platforms in 2025, according to a report by Andreessen Horowitz.

The foundation of the $80 billion+ SEO market just cracked with Apple’s announcement that AI-native search engines like Perplexity and Claude will be built into Safari, said the VC firm, calling Google’s distribution chokehold into question.

“A new paradigm is emerging, one driven not by page rank, but by language models. We’re entering Act II of search: Generative Engine Optimization (GEO),” stated the report.

Page rankings are determined by indexing sites and scoring them on keyword matching, content depth and breadth, backlinks, and user engagement.

However, today, it’s not about ranking high on the results page. LLMs are the new interface for how people find information. Visibility is obtained by showing up directly in the answers of LLMs like GPT-4, Gemini, and Claude.

Users’ queries are longer (averaging 23 words vs. 4), sessions are deeper (averaging 6 minutes), and responses provide personalized, multi-source synthesis with memory and visible reasoning, rather than relying on keyword matches.

Additionally, the business model and incentives have changed. Google monetizes user traffic through ads; users pay with their data and attention. In contrast, most LLMs are paywalled, subscription-driven services.

An ad market may eventually emerge on top of LLM interfaces, but the rules, incentives, and participants would likely look very different from those of traditional search.

New monitoring platforms, such as Profound, Goodie, and Daydream, enable brands to analyze how they appear in AI-generated responses.

Tools like Ahrefs’ Brand Radar track brand mentions in AI Overviews, enabling companies to understand how they’re framed and remembered by generative engines. Semrush has a dedicated AI toolkit designed to help brands track perception across generative platforms, optimize content for AI visibility, and respond quickly to emerging mentions in LLM outputs.
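At its core, this kind of GEO monitoring can be approximated in toy form by sampling generative-engine answers to relevant queries and measuring how often a brand is mentioned. A minimal sketch, with all brand names and sample answers hypothetical (real platforms collect responses at scale and also track sentiment and framing):

```python
import re

def mention_share(brand: str, answers: list[str]) -> float:
    """Fraction of collected LLM answers that mention the brand.

    `answers` is assumed to be a pre-collected sample of generative-engine
    responses to queries relevant to the brand's category.
    """
    # Word-boundary match so "Acme" doesn't match "Acmeville"
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for answer in answers if pattern.search(answer))
    return hits / len(answers) if answers else 0.0

# Hypothetical sample of answers to "What's the best project tracker?"
answers = [
    "Popular options include Acme Tracker and TaskFlow.",
    "Many teams use TaskFlow for its integrations.",
    "Acme Tracker is often praised for its simplicity.",
]
print(mention_share("Acme Tracker", answers))  # mentioned in 2 of 3 answers
```

Tracking this share over time, per query category, is one simple way a brand could gauge whether its "AI visibility" is rising or falling.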