Japan to begin developing ZettaFLOPS-scale supercomputer in 2025

(Image credit: Lenovo)

Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) has announced [PDF] plans to build a successor to the country's Fugaku supercomputer, which was once the world's most powerful HPC machine. The ministry wants RIKEN and Fujitsu to start developing the supercomputer next year, reports Nikkei.

A MEXT document says the new supercomputer aims to achieve an unprecedented 50 ExaFLOPS of AI performance, with zetta-scale peak performance in mind, in order to apply AI to scientific research. The zetta-class designation indicates a system capable of performing one sextillion (10^21) floating-point operations per second; one ZettaFLOPS is 1,000 times one ExaFLOPS. If Japan manages to build such a system by 2030, as planned, it will likely once again have the world's fastest supercomputer.
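To make the unit relationship concrete, here is a minimal sketch of the prefix arithmetic (the fraction shown is simply 50 ExaFLOPS expressed against a full ZettaFLOPS, not a target stated in the MEXT document):

```python
# FLOPS prefix arithmetic behind the zetta-scale claim.
EXA = 10**18    # ExaFLOPS: 10^18 floating-point operations per second
ZETTA = 10**21  # ZettaFLOPS: 10^21 ("one sextillion") operations per second

# One ZettaFLOPS is 1,000 ExaFLOPS.
print(ZETTA // EXA)  # -> 1000

# The 50 ExaFLOPS AI-performance target as a fraction of one ZettaFLOPS.
print(f"{50 * EXA / ZETTA:.0%}")  # -> 5%
```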

MEXT wants each computational node of the Fugaku Next supercomputer to deliver a peak performance of several hundred TFLOPS for double-precision (FP64) computations, around 50 PFLOPS for AI-oriented half-precision (FP16) calculations, and approximately 100 PFLOPS for AI-oriented 8-bit calculations, with memory bandwidth reaching several hundred TB/s using HBM-type memory. To put those numbers into context, a Fugaku computational node peaks at 3.4 TFLOPS for double-precision calculations, 13.5 TFLOPS for half-precision calculations, and 1.0 TB/s of memory bandwidth.
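As a rough back-of-the-envelope illustration of the generational gap those per-node figures imply (note: "several hundred" is approximated below as 400, an assumed placeholder rather than a number from the MEXT document):

```python
# Per-node peak figures quoted above; "several hundred" is assumed to be 400
# (TFLOPS or TB/s) purely for illustration -- not a value from the document.
fugaku = {"fp64_tflops": 3.4, "fp16_tflops": 13.5, "mem_bw_tbps": 1.0}
fugaku_next = {
    "fp64_tflops": 400.0,     # "several hundred" FP64 TFLOPS (assumed)
    "fp16_tflops": 50_000.0,  # ~50 PFLOPS = 50,000 TFLOPS
    "mem_bw_tbps": 400.0,     # "several hundred" TB/s (assumed)
}

# Factor by which each metric would improve per node.
speedups = {k: fugaku_next[k] / fugaku[k] for k in fugaku}
for metric, factor in speedups.items():
    print(f"{metric}: ~{factor:,.0f}x")
```

Even with conservative placeholders, this works out to roughly a hundredfold jump in double-precision throughput per node.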

MEXT plans to allocate ¥4.2 billion ($29.06 million) in the first year of development, with total government funding expected to exceed ¥110 billion ($761 million).

MEXT does not mandate any particular architecture for the Fugaku Next supercomputer, though its documents suggest a CPU with special-purpose accelerators or a CPU-GPU combination. MEXT also wants the supercomputer to feature an advanced storage system capable of handling traditional I/O workloads for data science and large-scale checkpointing, as well as the new I/O requirements of AI workloads.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.