NVIDIA - PCIe Bandwidth Utilization

Fiscal Year (Ending Jan) | Flagship Data Center GPU (Architecture) | PCIe Interface | Max Bandwidth (GB/s, per direction)
2021 | A100 (Ampere)     | PCIe 4.0 x16 | ~32
2022 | A100 (Ampere)     | PCIe 4.0 x16 | ~32
2023 | H100 (Hopper)     | PCIe 5.0 x16 | ~64
2024 | H100 (Hopper)     | PCIe 5.0 x16 | ~64
2025 | H100 / Blackwell  | PCIe 5.0 x16 | ~64

Notes:

The table lists the maximum theoretical per-direction bandwidth of the PCIe interface on NVIDIA's flagship data center GPUs for each fiscal year. "PCIe bandwidth utilization" is not a static figure reported by NVIDIA; it is a real-time metric that varies with the specific application and workload. The values here trace the evolution of the underlying hardware capability.
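
As a rough sanity check on these figures, the theoretical per-direction bandwidth can be computed from the per-lane transfer rate, the lane count, and the 128b/130b link encoding used by both PCIe 4.0 and 5.0. The short Python sketch below reproduces the ~32 GB/s and ~64 GB/s values; the function name and defaults are illustrative, not from any NVIDIA tool.

    def pcie_bandwidth_gb_s(transfer_rate_gt_s, lanes=16, encoding=128 / 130):
        """Theoretical per-direction bandwidth in GB/s for one PCIe link."""
        # GT/s per lane * lanes = raw Gbit/s; apply 128b/130b encoding overhead,
        # then convert bits to bytes.
        return transfer_rate_gt_s * lanes * encoding / 8

    print(pcie_bandwidth_gb_s(16.0))  # PCIe 4.0 x16 -> ~31.5 GB/s
    print(pcie_bandwidth_gb_s(32.0))  # PCIe 5.0 x16 -> ~63.0 GB/s

Real workloads typically sustain somewhat less than these ceilings once protocol and packet overheads are accounted for.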

The transition from PCIe 4.0 to PCIe 5.0 doubled the bandwidth available for the GPU to communicate with the rest of the system (host CPU, system memory, NICs, and storage). For even higher-bandwidth GPU-to-GPU communication, NVIDIA relies on its proprietary NVLink interconnect. Real-time PCIe throughput on a specific GPU can be monitored with the nvidia-smi command-line tool, which is built on the NVML library.
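
For illustration, here is a minimal Python sketch that reads the same counters through NVML via the pynvml bindings (the nvidia-ml-py package); it assumes a machine with at least one NVIDIA GPU and a recent driver. nvmlDeviceGetPcieThroughput reports recent TX/RX throughput in KB/s sampled over a short window, so treat the result as an approximate utilization snapshot rather than a precise measurement.

    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

        # Currently negotiated link generation and width (e.g. Gen 5, x16)
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)

        # Recent PCIe throughput counters, reported by NVML in KB/s
        tx_kbs = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        rx_kbs = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)

        print(f"PCIe Gen{gen} x{width}")
        print(f"TX: {tx_kbs / 1e6:.2f} GB/s, RX: {rx_kbs / 1e6:.2f} GB/s")
    finally:
        pynvml.nvmlShutdown()

Comparing the reported TX/RX rates against the ~32 GB/s or ~64 GB/s ceilings in the table above gives a rough utilization percentage for the link.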

Follow Ups

No follow-up discussions yet.