China plans 39 AI data centers with 115,000 restricted Nvidia Hopper GPUs — move raises alarm over sourcing, effectiveness of bans

Nvidia Hopper H100 GPU and DGX systems
(Image credit: Nvidia)

Chinese companies are preparing to equip 39 new AI data centers, mostly in Xinjiang and Qinghai, with more than 115,000 high-performance Nvidia Hopper GPUs, whose shipment to China is restricted under U.S. export rules, reports Bloomberg. Restrictions on shipments of Nvidia's H100 and H200 GPUs to China have not stopped local authorities in Xinjiang and Qinghai from approving the construction of very large data centers. Notably, even slowing demand for AI compute does not appear to be slowing data center construction in China.

A massive cluster

Around 70% of the processing capacity, enabled by roughly 80,500 of Nvidia's H100 and H200 GPUs, is expected to be concentrated in a single state-owned data center in Yiwu County, Xinjiang. The remaining 30% will be spread across the other 38 data center projects, largely elsewhere in Xinjiang and in Qinghai province. One of the larger projects is run by Nyocor, which plans to install 625 H100 DGX servers with around 5,000 H100 accelerators in multiple phases; the first phase involves 250 eight-way machines (2,000 H100 GPUs). These plans are based on official investment documents, tenders, and filings reviewed by Bloomberg.
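The figures in the filings are internally consistent, as a quick back-of-the-envelope check shows (assuming Nvidia's standard eight-GPU DGX/HGX server configuration):

```python
# Sanity check of the figures reported from the filings.
# Assumption: 8 GPUs per server, Nvidia's standard DGX/HGX configuration.
GPUS_PER_SERVER = 8

total_gpus = 115_000            # all 39 projects combined
yiwu_gpus = 80_500              # the single Yiwu County data center
nyocor_servers = 625            # Nyocor's planned H100 DGX servers
nyocor_phase1_servers = 250     # first phase

print(f"Yiwu share of total: {yiwu_gpus / total_gpus:.0%}")           # 70%
print(f"Nyocor GPUs: {nyocor_servers * GPUS_PER_SERVER}")             # 5000
print(f"Phase one GPUs: {nyocor_phase1_servers * GPUS_PER_SERVER}")   # 2000
```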

To put these numbers, 115,000 Hopper GPUs in total and 80,500 for the data center in Yiwu County, into context: it took Elon Musk's xAI around 100,000 H100 processors to train the Grok 3 AI model, one of the most advanced models currently available. DeepSeek trained its R1 model on a cluster of 50,000 Nvidia Hopper GPUs, comprising 30,000 H20 HGX units, 10,000 H800s, and 10,000 H100s. It is unknown what hardware DeepSeek used to train its R2 model.

Because the companies involved are state-controlled entities, they tend not to disclose the specifications or performance of their AI clusters, making it difficult to tell how the planned 80,500-GPU data center stacks up against China's existing AI clusters. Nonetheless, if it comes to fruition, it could be one of the most powerful AI data centers in the People's Republic: a cluster of around 80,000 Hopper H100 and H200 GPUs would significantly strengthen China's AI infrastructure in general, and could be used to train advanced large language models (LLMs) and large reasoning models (LRMs).

According to Chinese government statements cited by Bloomberg, Xinjiang has already built a data center that provides '24,000 PetaFLOPS of processing power,' said to be equivalent to around 12,000 Nvidia H100s, and makes that capacity available to other cities, such as Chongqing. To attract investors, local officials also offer a 20% discount on electricity, along with financial and housing incentives for experts in AI and green technologies.
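That equivalence roughly checks out, though the precision used for the comparison is not stated in the government figures, so the FP8 interpretation below is an assumption:

```python
# 24,000 PFLOPS across 12,000 H100-equivalents implies ~2 PFLOPS per GPU,
# which lines up with the H100's dense FP8 tensor throughput of roughly
# 1.98 PFLOPS (assuming the comparison is made at FP8 precision).
total_pflops = 24_000
h100_equivalents = 12_000
per_gpu_pflops = total_pflops / h100_equivalents
print(per_gpu_pflops)  # 2.0
```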

Tens of thousands of Hopper GPUs for China?

Bloomberg estimates that completing all 39 projects as envisioned would require procuring more than 14,000 servers based on either H100 or H200 processors, worth billions of dollars on China's black market.
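The server estimate follows directly from the GPU count, again assuming the standard eight-GPU HGX/DGX server configuration:

```python
# 115,000 GPUs at 8 per server (standard HGX/DGX baseboard) works out
# to 14,375 servers, i.e. Bloomberg's "more than 14,000 servers".
GPUS_PER_SERVER = 8
total_gpus = 115_000
servers = total_gpus // GPUS_PER_SERVER
print(servers)  # 14375
```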

While the operators of the data centers in Xinjiang and Qinghai, or their customers, could probably use H20 HGX processors for their workloads, their permit filings clearly indicate what they plan to use, suggesting that they need the raw performance of the H100 and the memory capacity of the H200 for their projects. Depending on the exact metric, the H100 is 3.34 – 6.69 times faster than the cut-down H20 in AI data formats and 1.52 – 34 times faster in HPC data formats. Even assuming linear performance scaling (which is not how clustered AI compute hardware behaves), substituting 115,000 H100 GPUs would require roughly 380,000 – 770,000 H20s.
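That substitution range can be reproduced from the cited speedups; a minimal sketch, assuming naive linear scaling:

```python
# Naive linear-scaling estimate: how many H20s would match 115,000 H100s?
# Real clusters scale sub-linearly due to interconnect overheads, so this
# is a lower bound on the true count. Speedup ratios are from the spec
# table: 3.34x (BF16/FP16 tensor) to 6.69x (INT8/FP8 tensor).
h100_count = 115_000
low_ratio, high_ratio = 3.34, 6.69
low = round(h100_count * low_ratio)    # 384,100
high = round(h100_count * high_ratio)  # 769,350
print(f"{low:,} – {high:,} H20s")      # roughly 380,000 – 770,000
```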

GPU | HGX H20 | H100 SXM | Difference
Architecture / GPU | Hopper / GH100 | Hopper / GH100 | -
Memory | 96 GB HBM3 | 80 GB HBM3 | 0.83X
Memory Bandwidth | 4.0 TB/s | 3.35 TB/s | 0.83X
INT8 / FP8 Tensor (dense) | 296 TFLOPS | 1,980 TFLOPS | 6.69X
BF16 / FP16 Tensor (dense) | 148 TFLOPS | 495 TFLOPS | 3.34X
TF32 Tensor (dense) | 74 TFLOPS | 495 TFLOPS | 6.69X
FP32 | 44 TFLOPS | 67 TFLOPS | 1.52X
FP64 | 1 TFLOPS | 34 TFLOPS | 34X
RT Core | N/A | N/A | -
MIG | Up to 7 MIG | Up to 7 MIG | -
L2 Cache | 60 MB | 60 MB | -
Media Engine | 7 NVDEC, 7 NVJPEG | 7 NVDEC, 7 NVJPEG | -
Power | 400W | 700W | 1.75X
Form Factor | 8-way HGX | 8-way HGX | -
Interface | PCIe Gen5 x16: 128 GB/s | PCIe Gen5 x16: 128 GB/s | -
NVLink | 900 GB/s | 900 GB/s | -

The documents do not explain how these parts will be sourced, but plenty of GPU servers are known to have been smuggled into China.

People with knowledge of U.S. government investigations said they were unaware of the specific Xinjiang projects, but confirmed to Bloomberg the existence of some unauthorized Nvidia hardware in China. However, they expressed doubt that any organized network could supply over 100,000 restricted processors to one country, let alone a single region. Estimates of the total number of such chips in China vary: two senior officials in the Biden administration mentioned a figure closer to 25,000, far fewer than what the Chinese projects require.

To date, there is no direct proof that China has accumulated, or will soon receive, the more than 115,000 restricted GPUs outlined in these construction plans. Still, work on the facilities continues. In Yiwu, where most of the activity is taking place, a large solar power tower has been erected to provide consistent electricity. The location was chosen for its access to solar and wind energy, inexpensive land, and high elevation, which helps to cool hardware.

As for the availability of H100 and H200 GPUs: millions of Hopper parts are installed across the world, and at least some of the data centers that use them will be swapping them out for higher-performance Blackwell accelerators. That means plenty of decommissioned H100 GPUs will hit the market in the coming months or quarters. Procuring them and smuggling them into China would be a plausible play for companies from the People's Republic, but there is no evidence that they have attempted to do so yet.

Nvidia has repeatedly stressed that there is no indication of large-scale diversion of its GPUs to China from other countries, such as Singapore. However, smuggling through Southeast Asia has become a point of concern, especially in Malaysia and Singapore. Singapore is currently prosecuting individuals for allegedly exporting AI servers containing restricted components to Malaysia, from where they might end up in China. U.S. officials have asked Malaysian authorities to take action against unauthorized technology transfers, and Malaysia has stated it will act if presented with solid evidence.

Jeffrey Kessler of the U.S. Commerce Department’s Bureau of Industry and Security recently told lawmakers that unauthorized transfers of AI GPUs to China are indeed occurring. So while Nvidia is not suspected of any misconduct, authorities in Washington are investigating how banned GPUs might be entering China, reports Bloomberg.

Xinjiang, particularly the Hami region, is already a hub for renewable power. The local government has been actively promoting the development of computing infrastructure using its vast energy reserves.

There is a catch

Nvidia commented on the issue to Bloomberg, saying that building functioning AI infrastructure with unofficial and/or used parts is both risky and impractical. The company also stated that it provides no operational or technical support for restricted products in China or elsewhere. While the lack of support may not be a problem for small deployments used for research, machines involving thousands or tens of thousands of GPUs are usually clustered with the help of Nvidia specialists to maximize their performance and efficiency.

However, as Chinese entities do not have a choice, they might attempt to build large Nvidia-based clusters without working with the company, even at the cost of performance and efficiency.


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

  • jp7189
    I could see people willing to sell used H100s on the secondary market if the price were high enough to buy brand new Blackwell versions. It would be harder to control that kind of movement.