AMD discussed the limiting factors of AI accelerator development at ISC 2025 — notably, the increasing power requirements of these bleeding-edge chips. ComputerBase reports that AMD expects ZettaFLOP-capable supercomputers of the future to require a nuclear power plant's worth of energy to operate.
AMD shared a graph projecting the growth of supercomputer power consumption through 2035. The graph starts between 2010 and 2015, when supercomputers achieved an efficiency of just 3.2 GFLOPS per watt. It then extends in a straight line to 2035, when AMD predicts zetta-scale supercomputers will reach 2,140 GFLOPS per watt, an efficiency that still implies roughly half a gigawatt of power for a full zettaFLOP machine. The projection assumes a 2x efficiency improvement in AI processor development every 2.2 years.
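To see where the half-gigawatt figure comes from, here is a quick back-of-the-envelope check using only the numbers from AMD's slide (the variable names are our own):

```python
# Back-of-the-envelope check of AMD's 2035 projection.
# 1 zettaFLOP = 1e21 floating-point operations per second.
ZETTAFLOPS = 1e21
EFFICIENCY_2035 = 2140e9  # projected efficiency: 2,140 GFLOPS per watt

power_watts = ZETTAFLOPS / EFFICIENCY_2035
print(f"Implied draw: {power_watts / 1e6:.0f} MW")  # ~467 MW, about half a gigawatt
```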
Memory bandwidth and cooling capacity are supposedly the main factors driving power consumption to such levels. As AI hardware grows more computationally powerful, memory bandwidth and datacenter cooling must scale to keep up, creating a snowball effect of ever-increasing power consumption across every part of the datacenter.
Compounding the problem is the demand for FP128, FP64, FP16, and FP8 compute capabilities. Even though FP64 and FP128 provide superior accuracy, many workloads run faster and more efficiently at FP16 or FP8, which cost less energy and memory bandwidth per operation. Future AI accelerators will therefore need to handle a wide range of precisions.
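As a rough illustration of why precision matters for bandwidth and energy, the sketch below compares the memory footprint of the same matrix at the precisions NumPy supports natively (FP8 and FP128 are not standard NumPy dtypes, so they are omitted here):

```python
import numpy as np

# The same 1,024 x 1,024 matrix stored at three precisions.
# Halving the precision halves the bytes that must be moved,
# which is what makes FP16/FP8 attractive for AI workloads.
for dtype in (np.float64, np.float32, np.float16):
    a = np.ones((1024, 1024), dtype=dtype)
    print(f"{np.dtype(dtype).name}: {a.nbytes / 1e6:.1f} MB")
# float64: 8.4 MB, float32: 4.2 MB, float16: 2.1 MB
```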
We are already seeing power consumption skyrocket with today's latest AI accelerators. Nvidia's B200 has a TDP of 1,000W, and AMD's brand-new MI355X sports a 1,400W TDP. By contrast, Nvidia's A100 flagship AI GPU from five years ago consumed just 400W, less than an RTX 5090.
The U.S. government is hoping nuclear power plants can meet this growing energy demand before it becomes a problem. Several big companies, such as Microsoft, are also investing heavily in nuclear fusion to solve their datacenter power problems.
Supercomputers are still firmly in the exaFLOP range, with El Capitan, an AMD MI300A-based system, currently the fastest supercomputer in the world. However, full-blown AI datacenter farms are now reaching zettaFLOP (zettascale) performance, with Oracle being the first to offer a zettascale cloud computing cluster, boasting an army of 131,072 Blackwell GPUs that equates to 2.4 zettaFLOPS of performance.
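Dividing Oracle's two figures gives a sense of the per-GPU contribution; a quick sanity check, presumably reflecting peak low-precision throughput:

```python
# Per-GPU share of Oracle's claimed zettascale cluster,
# using only the figures cited above.
TOTAL_FLOPS = 2.4e21  # 2.4 zettaFLOPS claimed for the whole cluster
NUM_GPUS = 131_072    # Blackwell GPUs in the cluster

per_gpu = TOTAL_FLOPS / NUM_GPUS
print(f"~{per_gpu / 1e15:.1f} PFLOPS per GPU")  # ~18.3 PFLOPS each (likely peak, low precision)
```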