Over the weekend, DeepSeek, a Chinese AI start-up funded by quant fund manager Liang Wenfeng, released a new version of its large language model (LLM). Early reports indicate that the model's performance is comparable to that of U.S. competitors such as OpenAI and Meta.
The key difference, however, is DeepSeek’s claim that its model requires fewer high-performance chips (e.g., those produced by Nvidia) and therefore may use significantly less power. Although the exact reduction in power usage has not been quantified, these developments raise several questions for investors:
- Will the AI models market still be dominated by U.S. companies as new, more advanced competitors emerge?
- What does this mean for hyperscalers’ capital expenditure, given that AI-directed investments by large-cap U.S. tech companies reached around USD 220 billion last year[1]?
- Looking beyond hardware and software, what are the implications for the data centre industry?
- How could this affect the U.S. power industry?
As infrastructure investors, we do not have the expertise to offer a view on the first question; that is better addressed by technology-focused managers who monitor evolving AI model capabilities. We can, however, offer insights on the remaining three.
Impact on Hyperscalers’ Capex
Over the last five years, capital expenditures by leading large-cap U.S. tech companies ballooned from approximately USD 80 billion to USD 220 billion as of 2024. The majority of this spending focused on AI-related hardware (for instance, Nvidia’s revenues increased from USD 12 billion to USD 129 billion over the same period), data centre investments, and research and development.
If AI models become substantially more efficient, we would expect the industry to become less capital-intensive, all else being equal. In turn, that could lead to more affordable AI products for both hyperscalers and their customers, potentially at the expense of the AI hardware industry.
What Does This Mean for the Hardware Industry?
If DeepSeek’s claimed performance and development cost are validated, hyperscalers might re-evaluate their hardware spending. After all, if fewer chips, or even older models, can run AI products just as effectively, why purchase one million units of the latest Nvidia chips?
Looking Beyond Hardware: Implications for Data Centres
Should LLMs become significantly more power-efficient, we expect a neutral or even positive impact on the data centre industry. As infrastructure investors, we are relatively agnostic to the specific AI technology in use. Data centre operators typically pass power costs through to their customers, which means lower AI power requirements could spur greater overall usage of data centre space.
Currently, the chief bottleneck in building new data centres, especially in the U.S., is the limited availability of power. This shortage has sparked a “power rush,” with hyperscalers securing long-term power supply contracts at premium prices. Nvidia’s latest Blackwell chip, for example, can require 60 kW to 120 kW per rack, driving high energy demand.
If future AI models demand less power, data centres should benefit. The same amount of power would support more data centre racks, effectively increasing demand for space while reducing the significance of power constraints.
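A rough calculation illustrates the point. The per-rack figures below come from the 60 kW to 120 kW range cited above for Nvidia's Blackwell chip; the 10 MW site budget is a hypothetical figure chosen purely for illustration.

```python
# Illustrative sketch: how many racks a fixed site power budget can supply.
# Per-rack draws (120 kW vs. 60 kW) reflect the range cited in the text;
# the 10 MW site budget is a hypothetical assumption.

def racks_supported(site_budget_kw: float, per_rack_kw: float) -> int:
    """Number of whole racks a fixed power budget can supply."""
    return int(site_budget_kw // per_rack_kw)

site_budget_kw = 10_000  # hypothetical 10 MW facility

print(racks_supported(site_budget_kw, 120))  # 83 racks at 120 kW per rack
print(racks_supported(site_budget_kw, 60))   # 166 racks at 60 kW per rack
```

Halving per-rack power draw roughly doubles the number of racks the same grid connection can serve, which is why a power-constrained industry could welcome more efficient models.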
Effects on the U.S. Power Industry
Based on DeepSeek’s claims of more efficient AI models, U.S. power requirements in the AI sector might turn out to be lower than initially projected. The U.S. AI sector is expected to consume around 100 TWh by 2027, compared with total U.S. electricity consumption of about 4,200 TWh in 2022[2]. Over 2024–2025, heightened expectations around AI-related power usage drove U.S. power prices sharply higher nationwide.
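For context, a back-of-the-envelope calculation using the figures above (100 TWh projected for the AI sector by 2027 against roughly 4,200 TWh of total U.S. consumption in 2022) puts AI's projected share of U.S. electricity use at only a few percent:

```python
# Back-of-the-envelope share of projected AI consumption in total U.S.
# electricity use, using the figures cited in the text.

ai_twh = 100       # projected U.S. AI-sector consumption by 2027
total_twh = 4_200  # total U.S. electricity consumption, 2022

ai_share_pct = 100 * ai_twh / total_twh
print(f"AI share of U.S. electricity: {ai_share_pct:.1f}%")  # ~2.4%
```

Even before any efficiency gains, the projected AI load is a modest slice of total demand, which is why downward revisions could matter more for marginal pricing and premium contracts than for overall grid planning.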
U.S. independent power producers (IPPs) have capitalized on this trend by signing long-term contracts at premium prices with technology companies seeking to secure their future AI power needs. Anticipation of additional lucrative deals drove IPP stock prices significantly higher in 2024–2025 (Constellation Energy +199%, Vistra Energy +401%, Talen Energy +283%, NRG Energy +123%[3]). If AI’s power requirements decline, these companies could see a pullback as expectations for high-priced, long-term contracts are revised downward.
Conclusion
The race for more efficient AI models is accelerating, and DeepSeek’s new release highlights this trend by posing a direct challenge to U.S. dominance in AI. Although the exact development cost and power savings have yet to be confirmed, such efficiency improvements would likely be neutral, or even positive, for data centre operators. Lower power needs can spur more data centre usage, as space, not power, may become the key constraint.
However, for power producers—particularly those that have benefited from recent AI-driven demand—the outlook could be less favourable. Long-term power requirements for AI could be revised downward, weakening the case for additional premium-priced contracts.
Currently, our Infrastructure Securities fund holds positions in three data centre companies, representing 11% of our assets, and has only limited exposure to companies with power production capabilities, such as NextEra Energy and Entergy.