- Filling a single 1 GW AI facility with computing equipment costs nearly $80 billion.
- Planned AI capacity across the industry could reach 100 GW.
- High-end GPU hardware depreciates in about five years and is replaced outright rather than refreshed.
IBM CEO Arvind Krishna questions whether the current pace and scale of artificial intelligence data center expansion can remain financially sustainable under current assumptions.
He estimates that filling a single 1 GW facility with computing equipment now costs close to $80 billion.
With public and private plans calling for about 100 GW of future capacity aimed at training advanced models, the estimated financial exposure rises to $8 trillion.
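As a rough back-of-the-envelope check (an illustrative calculation using only the two figures cited above, not Krishna's own model), the arithmetic behind that $8 trillion figure looks like this:

```python
# Illustrative arithmetic behind the $8 trillion estimate, using only the
# figures cited in the article; not Krishna's actual model.
cost_per_gw_usd = 80e9       # ~$80 billion of computing equipment per 1 GW facility
planned_capacity_gw = 100    # ~100 GW of planned AI capacity industry-wide

total_capital_usd = cost_per_gw_usd * planned_capacity_gw
print(f"Implied hardware capital: ${total_capital_usd / 1e12:.1f} trillion")
# -> Implied hardware capital: $8.0 trillion
```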
The Economic Burden of Next-Generation AI Data Centers
Krishna ties this trajectory directly to the replacement cycle that governs today's accelerator fleets.
Most of the high-end GPU equipment deployed at these centers depreciates in about five years.
After this period, operators do not renew the equipment, but replace it completely. The result is not a one-time hit to capital, but recurring liabilities that compound over time.
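To see why a fixed replacement cycle behaves like a recurring liability rather than a one-time purchase, here is a minimal sketch that spreads the per-gigawatt hardware cost over an assumed straight-line five-year life (the straight-line treatment is our simplification, not a reported accounting policy):

```python
# Minimal sketch: a five-year replacement cycle turns a one-time hardware
# outlay into a recurring annual cost. Straight-line spreading is an
# assumption for illustration; actual depreciation schedules vary.
cost_per_gw_usd = 80e9        # ~$80 billion of hardware per 1 GW facility (article figure)
replacement_cycle_years = 5   # GPUs assumed fully replaced every ~5 years

annual_cost_per_gw = cost_per_gw_usd / replacement_cycle_years
print(f"Recurring hardware cost per GW: ${annual_cost_per_gw / 1e9:.0f} billion per year")
# -> Recurring hardware cost per GW: $16 billion per year
```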
CPU resources also remain part of these deployments, but they are no longer at the center of spending decisions.
The balance has shifted toward dedicated accelerators that run massively parallel workloads at speeds unmatched by general-purpose processors.
This shift has significantly redefined the scope of modern AI facilities and increased capital requirements beyond what traditional enterprise data centers once required.
Krishna argues that depreciation is the factor that is most often misunderstood by market participants.
The pace of architectural change means that productivity leaps are occurring faster than financial write-offs can comfortably cover.
Hardware that is still functional becomes economically obsolete long before its physical lifespan ends.
Investors such as Michael Burry have expressed similar doubts about whether cloud giants can continue to extend asset life as model sizes and training requirements grow.
From a financial point of view, the burden is no longer associated with energy consumption or land acquisition, but with the forced abandonment of increasingly expensive equipment.
Similar refresh dynamics already exist in workstation-class hardware, but the scale at hyperscale sites is entirely different.
Krishna estimates that servicing the capital value of these multi-gigawatt campuses would require hundreds of billions of dollars in annual profits just to break even.
This requirement is based on the current economics of hardware rather than speculative long-term efficiency gains.
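For a sense of scale, take a hypothetical 10 GW campus (an assumed size chosen for illustration; the article only says plans are measured in tens of gigawatts) and apply the same per-gigawatt cost and five-year cycle:

```python
# Hypothetical scale check: the 10 GW campus size is assumed for illustration;
# the per-GW cost and five-year cycle come from the figures quoted above.
campus_size_gw = 10
cost_per_gw_usd = 80e9
replacement_cycle_years = 5

campus_capital = campus_size_gw * cost_per_gw_usd
annual_replacement_cost = campus_capital / replacement_cycle_years
print(f"Campus hardware capital: ${campus_capital / 1e9:.0f} billion")
print(f"Annual replacement burden: ${annual_replacement_cost / 1e9:.0f} billion per year")
# -> Campus hardware capital: $800 billion
# -> Annual replacement burden: $160 billion per year
```

On those assumptions, hardware replacement alone lands in the hundreds of billions of dollars per year, which is the order of magnitude Krishna describes.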
These predictions come as leading technology companies announce AI campuses with power requirements measured in tens of gigawatts rather than megawatts.
Some of these proposals already compete with the electricity demand of entire countries, raising parallel concerns about grid capacity and long-term electricity prices.
Krishna estimates there is almost zero probability that today's LLMs will achieve general intelligence on the next generation of hardware without fundamental changes in knowledge integration.
This assessment suggests that the investment wave is driven more by competitive pressures than by proven technological inevitability.
One interpretation is difficult to avoid: the build-out assumes that future earnings will scale in line with unprecedented spending, even as depreciation cycles shorten and power constraints tighten in many regions.
The risk is that financial expectations may outpace the economic mechanisms needed to support them throughout the life cycle of these assets.
Via Tom's Hardware