Nvidia's data center networking division hit $11B last quarter — 267% YoY growth — quietly becoming the company's second-largest revenue driver.
Nvidia's networking division, built largely on its 2020 Mellanox acquisition, generated $11B in Q4 revenue and $31B for the full year, surpassing Cisco's entire networking business on a quarterly basis. The division spans NVLink, InfiniBand switches, Spectrum-X Ethernet, and co-packaged optics: the full stack for building AI factories. CEO Jensen Huang positioned it as the 'backbone' of modern AI infrastructure, not a peripheral. The 267% YoY growth rate makes it one of the fastest-scaling enterprise hardware segments on record.
Nvidia's networking stack — NVLink, InfiniBand, Spectrum-X — is no longer optional plumbing. It's the performance ceiling for distributed AI training. If you're running multi-GPU workloads and ignoring network topology, you're leaving significant throughput on the table. The gap between optimized and unoptimized networking in AI factories is now measured in training time, not milliseconds.
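To make "measured in training time, not milliseconds" concrete, here is a minimal sketch of the standard ring all-reduce cost model, where each GPU moves 2(N-1)/N of the gradient data per sync. The model size and bandwidth figures (900 GB/s for NVLink-class interconnect, 12.5 GB/s for a 100 Gb/s Ethernet link) are illustrative assumptions, not benchmarks:

```python
# Ring all-reduce cost model: a back-of-envelope sketch, not a benchmark.
# Bandwidth figures below are illustrative assumptions.

def allreduce_seconds(grad_bytes: float, n_gpus: int, bw_bytes_per_s: float) -> float:
    """Time for one ring all-reduce: each GPU moves 2*(N-1)/N of the data."""
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes / bw_bytes_per_s

grad_bytes = 7e9 * 2  # e.g. a 7B-parameter model with fp16 gradients

nvlink = allreduce_seconds(grad_bytes, 8, 900e9)     # NVLink-class bandwidth
ethernet = allreduce_seconds(grad_bytes, 8, 12.5e9)  # 100 Gb/s Ethernet

print(f"NVLink:   {nvlink * 1e3:.1f} ms per gradient sync")
print(f"Ethernet: {ethernet * 1e3:.1f} ms per gradient sync")
```

Under these assumptions the gap is roughly 27 ms versus 2 s per sync, a ~72x delta that repeats every training step.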
If you're running distributed training on AWS or Azure, benchmark your current inter-node bandwidth against NVLink-connected instances (e.g., p4d vs p5 on AWS) — the throughput delta will tell you whether your bottleneck is compute or networking.
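A quick way to interpret that benchmark is to compare gradient-sync time against per-step compute time, using the instance types' published networking figures (AWS lists 400 Gb/s EFA for p4d.24xlarge and 3200 Gb/s for p5.48xlarge). The model size and compute time here are assumed placeholders; substitute your own profiled numbers:

```python
# Bottleneck check sketch: is a step dominated by compute or by inter-node
# comms? Uses AWS's published EFA figures; model size and per-step compute
# time are ASSUMED placeholders.

def comms_seconds(grad_bytes: float, link_gbps: float) -> float:
    """Seconds to push one full gradient copy over the inter-node link."""
    return grad_bytes / (link_gbps / 8 * 1e9)  # Gb/s -> bytes/s

grad_bytes = 13e9 * 2   # e.g. 13B params, fp16 gradients
step_compute_s = 0.5    # ASSUMED per-step compute time; profile your own

for name, gbps in [("p4d (400 Gb/s)", 400), ("p5 (3200 Gb/s)", 3200)]:
    t = comms_seconds(grad_bytes, gbps)
    bound = "network-bound" if t > step_compute_s else "compute-bound"
    print(f"{name}: {t:.2f} s comms vs {step_compute_s} s compute -> {bound}")
```

If comms time exceeds compute time, faster interconnect (or gradient compression/overlap) buys you throughput; if not, the network is not your ceiling.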
Open the NVIDIA NVLink documentation at developer.nvidia.com and run the provided bandwidth test (nvbandwidth) on any multi-GPU instance you have access to; you'll get concrete GB/s numbers showing your current interconnect ceiling.
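Once you have the tool's output, a small script can pull out the peak figure and compare it to a theoretical ceiling. The sample output and line format below are assumed for illustration (nvbandwidth's real output layout may differ, so adapt the regex), and the 450 GB/s per-direction figure assumes H100-class NVLink (900 GB/s bidirectional):

```python
# Sketch: extract the peak GB/s figure from bandwidth-test text output and
# compare it to a theoretical peak. The sample text and its format are
# ASSUMED/illustrative; nvbandwidth's real output may differ.
import re

sample_output = """\
memcpy CE GPU 0 -> GPU 1: 371.2 GB/s
memcpy CE GPU 1 -> GPU 0: 369.8 GB/s
"""

NVLINK_PEAK_GBPS = 450.0  # assumed: H100 NVLink, ~450 GB/s per direction

def peak_measured(text: str) -> float:
    """Return the largest GB/s figure found in the tool's output."""
    return max(float(m) for m in re.findall(r"([\d.]+)\s*GB/s", text))

peak = peak_measured(sample_output)
print(f"measured {peak} GB/s = {peak / NVLINK_PEAK_GBPS:.0%} of assumed peak")
```

Measured numbers well below the theoretical peak usually point at topology (PCIe hops instead of NVLink) or misconfigured affinity rather than hardware limits.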