Tue, May 12, 2026

The Three Waves of AI Infrastructure Investment

AI investment has progressed through three waves: first GPU compute, then High Bandwidth Memory, and now the energy generation and power management needed to run the infrastructure itself.

The First Wave: The Compute Engine

In 2016, the investment landscape for AI was vastly different. Neural networks were well established as a research field, but the scale of today's Large Language Models (LLMs) was unimaginable. The initial high-conviction call centered on Nvidia, the company providing the Graphics Processing Units (GPUs) needed for massively parallel processing.

At the time, Nvidia was primarily viewed as a gaming company. However, the realization that GPUs were uniquely suited to the massive matrix multiplications at the heart of deep learning transformed the company into the dominant provider of the "brains" for AI. By backing Nvidia in 2016, investors were betting on the fundamental necessity of compute power, anticipating that virtually every AI developer would eventually require the same specialized hardware to train their models.
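The parallelism claim above can be made concrete with a minimal sketch (illustrative only, not Nvidia's implementation): in a matrix multiply, each output cell depends only on one row of the first matrix and one column of the second, so all cells can be computed independently, which is exactly the workload pattern a GPU's thousands of cores exploit.

```python
def matmul(a, b):
    """Naive matrix multiply. Every (i, j) iteration of the outer two
    loops is independent of the others, so each could run on its own
    GPU thread; only the innermost k-loop is a per-cell reduction."""
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):           # independent across i
        for j in range(cols):       # independent across j
            for k in range(inner):  # reduction within one output cell
                c[i][j] += a[i][k] * b[k][j]
    return c

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

Deep learning training repeats this operation billions of times over far larger matrices, which is why hardware built for independent parallel work outpaces sequential CPUs on the task.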

The Second Wave: The Memory Bottleneck

By 2024, the narrative shifted. While compute power remained vital, a new bottleneck emerged: the "memory wall." As GPUs became faster, the speed at which data could be moved from memory to the processor became the primary limiting factor. This led to a surge in the importance of High Bandwidth Memory (HBM).
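The "memory wall" can be quantified with the standard roofline model, in which achievable performance is capped either by raw compute or by how fast memory can feed the chip. The hardware numbers below are illustrative assumptions, not the specifications of any particular GPU.

```python
def attainable_flops(peak_flops, mem_bandwidth, arithmetic_intensity):
    """Roofline model: a kernel achieves at most the lesser of the
    chip's peak compute rate and (memory bandwidth x FLOPs performed
    per byte moved). Below the crossover, the kernel is memory-bound."""
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)

peak = 1000e12    # assumed: 1,000 TFLOP/s of raw compute
bandwidth = 3e12  # assumed: 3 TB/s of memory bandwidth
intensity = 100   # assumed kernel: 100 FLOPs per byte moved

achieved = attainable_flops(peak, bandwidth, intensity)
print(achieved / peak)  # 0.3 -> only 30% of the compute is usable
```

Under these assumed numbers, the processor sits idle 70% of the time waiting on memory, which is precisely why faster HBM, rather than yet more compute, became the lever that moved performance.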

Investors pivoted toward memory chip makers capable of producing HBM3 and HBM3e. This shift represented a move from the processor itself to the infrastructure that feeds the processor. Without the ability to move massive amounts of data quickly, the most powerful GPUs in the world would sit idle, waiting for information. The investment in memory makers was not a departure from the AI theme, but a logical progression in the hardware stack.

The Third Wave: The Energy Crisis

Currently, the focus has shifted again. Having solved for compute and memory, the industry has hit a physical wall: electricity and power management. The energy requirements for training and running LLMs are astronomical, threatening to outpace the capacity of existing electrical grids.

The third conviction call is centered on the energy ecosystem. This includes not only power generation, specifically the resurgence of nuclear energy and Small Modular Reactors (SMRs), but also the hardware required to deliver that power and dissipate the heat generated by dense clusters of AI chips. The thesis is simple: no matter how efficient the chips or how fast the memory, an AI data center cannot function without a stable, massive, and sustainable power source.

Key Details of the AI Infrastructure Cycle

  • Compute (2016-2023): Focused on GPUs and parallel processing; dominated by Nvidia's CUDA ecosystem.
  • Memory (2023-2024): Focused on High Bandwidth Memory (HBM) to relieve the bandwidth bottleneck between memory and processor.
  • Power (2024-Present): Focused on electrical grid stability, energy generation (Nuclear/SMRs), and advanced thermal management/cooling.
  • The Bottleneck Pattern: Investment value migrates from the primary component to the secondary support system once the primary component reaches scale.
  • Physical Constraints: The shift toward energy indicates that AI growth is now limited by physical infrastructure and geography rather than just software or chip design.

Implications for the Market

This progression suggests that the AI trade is maturing. It is moving away from the "magic" of the software and into the "gritty" reality of industrial engineering. The current emphasis on power implies that the next era of AI growth will be dictated by who can secure energy permits, who can build the most efficient power grids, and who can solve the cooling challenges of high-density server racks.

For investors, this pattern indicates that the "AI play" is no longer just about semiconductors. It is now an infrastructure play involving utilities, energy providers, and industrial cooling specialists. The sequence remains consistent: identify the current limiting factor of the system, and invest in the companies that provide the solution to that limitation.


Read the Full MarketWatch Article at:
https://www.marketwatch.com/story/the-investor-who-backed-nvidia-in-2016-and-a-memory-chip-maker-in-2024-now-has-a-third-conviction-call-beb2e7ac