NVIDIA HGX B200

NVIDIA HGX™ B200 is a unified AI platform for develop-to-deploy pipelines, built for businesses of any size at any stage of their AI journey.

The HGX™ B200 unified AI platform defines the next chapter of generative AI by taking full advantage of NVIDIA Blackwell GPUs and high-speed interconnects.

Configured with eight Blackwell GPUs, HGX B200 delivers unparalleled generative AI performance with 1.4 terabytes (TB) of GPU memory, 64 terabytes per second (TB/s) of HBM3e memory bandwidth, and 14.4 TB/s of all-to-all GPU bandwidth, making it uniquely suited to handle any enterprise AI workload.
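The aggregate figures above can be sanity-checked from per-GPU numbers. A minimal sketch, assuming 180 GB of HBM3e per GPU at 8 TB/s and 1.8 TB/s of NVLink bandwidth per GPU (the per-GPU values are assumptions; only the eight-GPU totals appear in this datasheet):

```python
# Sanity check: derive the platform aggregates quoted above from assumed
# per-GPU figures. Per-GPU values are assumptions, not datasheet values.
NUM_GPUS = 8
HBM_PER_GPU_GB = 180          # assumed HBM3e capacity per Blackwell GPU
HBM_BW_PER_GPU_TBS = 8.0      # assumed HBM3e bandwidth per GPU
NVLINK_BW_PER_GPU_TBS = 1.8   # assumed all-to-all NVLink bandwidth per GPU

total_memory_gb = NUM_GPUS * HBM_PER_GPU_GB         # 1,440 GB, i.e. ~1.4 TB
total_hbm_bw_tbs = NUM_GPUS * HBM_BW_PER_GPU_TBS    # 64 TB/s
total_nvlink_bw_tbs = NUM_GPUS * NVLINK_BW_PER_GPU_TBS  # 14.4 TB/s

print(total_memory_gb, total_hbm_bw_tbs, total_nvlink_bw_tbs)
```

Each aggregate matches the datasheet totals (1,440 GB, 64 TB/s, 14.4 TB/s), which is how the assumed per-GPU figures were chosen.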

Highlights

One Platform for Develop-to-Deploy Pipelines

Enterprises require massive amounts of compute power to handle complex AI datasets at every stage of the AI pipeline, from training to fine-tuning to inference. With NVIDIA HGX B200, enterprises can arm their developers with a single platform built to accelerate their workflows.

Powerhouse of AI Performance

Powered by the NVIDIA Blackwell architecture’s advancements in computing, HGX B200 delivers 3X the training performance and 15X the inference performance of HGX H100. As the foundation of NVIDIA HGX BasePOD™ and NVIDIA HGX SuperPOD™, HGX B200 delivers leading-edge performance for any workload.

Proven Infrastructure Standard

HGX B200 is a fully optimized hardware and software platform that includes the complete NVIDIA AI software stack, including NVIDIA Base Command and NVIDIA AI Enterprise software, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services.

Technical Specifications

NVIDIA HGX B200 Specifications
GPU 8x NVIDIA Blackwell GPUs
GPU Memory 1,440GB total GPU memory
Performance 72 petaFLOPS training and 144 petaFLOPS inference
Power Consumption ~14.3kW max
CPU 2x Intel® Xeon® Platinum 8570 processors
112 cores total, 2.1 GHz (base), 4.0 GHz (max boost)
System Memory Up to 4TB
Networking 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI
  • Up to 400Gb/s InfiniBand/Ethernet
2x dual-port QSFP112 NVIDIA BlueField-3 DPU
  • Up to 400Gb/s InfiniBand/Ethernet
Management Network 10Gb/s onboard NIC with RJ45
100Gb/s dual-port Ethernet NIC
Host baseboard management controller (BMC) with RJ45
Storage OS: 2x 1.9TB NVMe M.2
Internal storage: 8x 3.84TB NVMe U.2
Software NVIDIA AI Enterprise: Optimized AI Software
NVIDIA Base Command™: Orchestration, Scheduling, and Cluster Management
HGX OS / Ubuntu: Operating system
Rack Units (RU) 10 RU
System Dimensions Height: 17.5in (444mm)
Width: 19.0in (482.2mm)
Length: 35.3in (897.1mm)
Operating Temperature 5–30°C (41–86°F)
Enterprise Support Three-year Enterprise Business-Standard Support for hardware and software
24/7 Enterprise Support portal access
Live agent support during local business hours
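Dividing the platform totals in the specification table by the GPU count gives rough per-GPU performance figures. A hedged sketch (the per-GPU numbers below are derived estimates, not values stated in this datasheet, and the precisions behind the training and inference figures are not specified here):

```python
# Estimate per-GPU performance by dividing the platform totals from the
# specification table across the eight GPUs (derived estimates only).
NUM_GPUS = 8

training_pflops_total = 72    # platform training performance, petaFLOPS
inference_pflops_total = 144  # platform inference performance, petaFLOPS

training_pflops_per_gpu = training_pflops_total / NUM_GPUS    # 9.0 petaFLOPS
inference_pflops_per_gpu = inference_pflops_total / NUM_GPUS  # 18.0 petaFLOPS

print(training_pflops_per_gpu, inference_pflops_per_gpu)
```

These per-GPU estimates assume the workload scales evenly across all eight GPUs; real scaling depends on interconnect and workload characteristics.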