Why Nvidia just poured $2 billion into AI ASIC competitor Marvell — NVLink Fusion turns into soft ecosystem lock-in

Marvell building (Image credit: Marvell)

On Tuesday, Nvidia announced that it has invested $2 billion in Marvell Technology and entered a partnership through NVLink Fusion, the rack-scale platform that allows third-party silicon to plug into Nvidia's proprietary interconnect fabric. The deal covers custom XPUs, NVLink-compatible scale-up networking, silicon photonics, and AI-RAN infrastructure for 5G and 6G networks.

It’d be an understatement to say that this deal is unusual, given Marvell’s status as one of the two dominant custom ASIC design houses, alongside Broadcom. Its fastest-growing business is designing the custom AI accelerators that hyperscalers like AWS, Microsoft, and Google use to reduce their dependence on Nvidia GPUs.

NVLink Fusion, announced at Computex 2025 last May, enables heterogeneous AI infrastructure where non-Nvidia accelerators can communicate with Nvidia GPUs, CPUs, and networking hardware over NVLink's high-bandwidth, low-latency fabric. NVLink delivers up to 1.8 TB/s per GPU, a huge bandwidth advantage over PCIe Gen5, and can scale to 72 accelerators per rack in its NVL72 configuration.
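
For a rough sense of that gap, here is a back-of-the-envelope comparison. The 1.8 TB/s figure is Nvidia's quoted bidirectional NVLink rate per GPU; the PCIe number assumes the commonly quoted ~128 GB/s of bidirectional bandwidth for a Gen5 x16 slot, and both are peak theoretical rates rather than measured throughput.

```python
# Back-of-the-envelope comparison of per-GPU scale-up bandwidth.
# 1.8 TB/s is Nvidia's quoted bidirectional NVLink figure per GPU;
# ~128 GB/s is the assumed peak bidirectional rate of a PCIe Gen5 x16 link.
nvlink_gb_s = 1800          # NVLink per GPU, bidirectional, in GB/s
pcie_gen5_x16_gb_s = 128    # PCIe Gen5 x16, bidirectional, in GB/s

print(f"NVLink advantage: ~{nvlink_gb_s / pcie_gen5_x16_gb_s:.0f}x")  # prints ~14x
```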

The platform is built around the OCP MGX rack architecture and includes a modular technology stack consisting of Nvidia GPUs, Vera CPUs, NVLink switch silicon, ConnectX SuperNICs, BlueField DPUs, Spectrum-X switches, and Mission Control management software. Partners can plug their own custom XPUs or CPUs into the compute layer, but the surrounding infrastructure is all Nvidia.

Every NVLink Fusion platform must include at least one Nvidia product, whether a CPU, GPU, or switch. Nvidia has also retained control over which partners receive NVLink IP licenses, so custom chips designed to displace Nvidia's GPUs still generate revenue for the company through infrastructure sales every time a rack goes live. Under the deal, Marvell will provide custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia will supply the rest of the stack, including Vera CPUs, ConnectX NICs, BlueField DPUs, NVLink interconnect, and Spectrum-X switches.

Marvell's ASIC business

Marvell reported $8.2 billion in revenue for its fiscal year 2026 ending January 2026, with data center revenue accounting for more than 74% of the total.

Custom AI compute is the fastest-growing segment within that business, and Marvell's client list reads like a directory of companies actively building alternatives to Nvidia GPUs. AWS is its largest custom-silicon customer, with Marvell helping develop the Trainium series of AI accelerators.

Microsoft is also working with Marvell, among others, on its Maia AI accelerator, and it’s understood that Google has partnered with Marvell on its Axion Arm CPU for cloud workloads. In each case, the explicit objective is to give the hyperscaler a cheaper, more efficient (or more customizable) alternative to buying Nvidia products at scale.

The custom ASIC market is growing fast. Counterpoint Research estimated in January 2026 that global AI server compute ASIC shipments will triple between 2024 and 2027. Broadcom is projected to retain a 60% market share in ASIC design services by 2027, with Marvell facing some design-win headwinds but still doubling its shipment volumes over that period. By investing in Marvell and binding its custom XPUs to the NVLink fabric, Nvidia ensures it retains a revenue position even in racks where its GPUs have been replaced.

The Marvell deal is the second $2 billion investment Nvidia has made in 2026. The first, announced in January, went to AI cloud provider CoreWeave, which rents access to Nvidia GPUs. This was widely described as an example of the circular financing arrangements that have lifted AI company valuations: Nvidia invests capital in a customer, and that customer uses it to buy more Nvidia hardware. Nvidia already held a 7% stake in CoreWeave and has committed to buying more than $6 billion in its services through 2032.

That's a very different play from the deal just struck with Marvell. The CoreWeave investment works the demand side: fund a customer so it buys more GPUs. The Marvell deal works the supply side, co-opting the company that designs the alternative silicon itself. Instead of fighting the custom ASIC trend, Nvidia is absorbing it into its own infrastructure.

The Marvell deal is the latest in a series of NVLink Fusion expansions. Samsung Foundry joined in October to offer design-to-manufacturing support for NVLink-compatible custom chips on its 3nm and 2nm nodes, giving Nvidia a second major foundry partner after TSMC. Then Arm entered the program in November, enabling its licensees to build CPUs with native NVLink connectivity, which opens the door for hyperscalers like Google, Meta, and Microsoft to integrate NVLink directly into their own Arm-based SoCs. SiFive joined in January 2026, bringing RISC-V into the ecosystem. Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence were among the original partners announced at Computex.

On the other side sit AMD, Intel, and Broadcom, all of which are backing the Ultra Accelerator Link (UALink) consortium as an open industry-standard alternative to NVLink. UALink's 1.0 specification supports up to 1,024 accelerators per pod at 200 GT/s per lane, but the standard has yet to ship in production hardware, while NVLink is already deployed at scale in Blackwell NVL72 racks.

Broadcom's absence from NVLink Fusion is pretty interesting, given it’s the other half of the custom ASIC duopoly. Broadcom has been Google's TPU design partner for over a decade, spanning six generations of the chip, and also works with Meta on its MTIA accelerator and reportedly with OpenAI on a custom ASIC. If Broadcom's clients eventually face pressure to deploy their custom chips in NVLink-compatible racks, the current dividing line between the NVLink Fusion camp and the UALink camp could shift.

The partnership also covers two further areas that are easy to overlook but strategically significant: silicon photonics and AI-RAN. Marvell acquired Celestial AI in early 2026 for $3.25 billion, adding photonic fabric technology to its portfolio. Optical interconnects are becoming critical as AI clusters scale beyond the distances where electrical signals maintain their integrity, and Marvell's optical DSP products are already widely used in pluggable modules for data center networking. The company's fiscal year 2027 revenue target for data center switches is above $600 million, roughly double its fiscal year 2026 figure.

Meanwhile, the AI-RAN part of the Nvidia-Marvell collaboration will target the transformation of telecom infrastructure into AI-capable networks using Nvidia's Aerial platform for 5G and 6G. This is a smaller market today, but both companies are positioning for a buildout that would embed AI processing directly into the radio access network.

"The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories," said Jensen Huang on the investment. Whatever the use-case may be, Nvidia wants to be a critical part of AI infrastructure. And by opening the gates and partnering with Marvell, his company is broadening the size of its walled garden.

Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.