Andreessen Horowitz: “Predicting the Generative AI Market Is Hard”

IBL News | New York

Generative AI is getting real traction from real companies: models like Stable Diffusion and ChatGPT are setting historic records for user growth, and several applications in image generation, copywriting, and code writing have exceeded $100 million in annualized revenue.

• Infrastructure vendors are the biggest winners in this market so far, capturing the majority of dollars.

• Application companies are growing top-line revenue very quickly but often struggle with retention, product differentiation, and gross margins. Many apps are also relatively undifferentiated: they rely on similar underlying AI models and haven’t discovered obvious network effects, data advantages, or workflows that are hard for competitors to duplicate.

• Most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale. However, given the huge usage of these models, large-scale revenues may not be far behind.

This is what investors at Andreessen Horowitz have observed after meeting with dozens of startup founders and operators at large companies.

“Predicting what will happen next is much harder. But we think the key thing to understand is which parts of the stack are truly differentiated and defensible,” states the company.

“The first wave of generative AI apps are starting to reach scale, but struggle with retention and differentiation.”

This is Andreessen Horowitz’s preliminary view of the generative AI tech stack.

Andreessen Horowitz estimates that 10–20% of total revenue in generative AI today goes to the big three clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

The biggest winner in generative AI so far is Nvidia. The company reported $3.8 billion of data center GPU revenue in the third quarter of its fiscal year 2023, including a meaningful portion for generative AI use cases.

Other hardware options do exist, including Google Tensor Processing Units (TPUs), AMD Instinct GPUs, AWS Inferentia and Trainium chips, and AI accelerators from startups like Cerebras, SambaNova, and Graphcore. Intel, late to the game, is also entering the market with its high-end Habana chips and Ponte Vecchio GPUs.

“Models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.”