In stock

GIGABYTE G893-ZD1-AAX5 HGX™ B200 8-GPU SERVER

SKU: G893-ZD1-AAX5


  • NVIDIA HGX™ B200
  • 1.8TB/s GPU-to-GPU bandwidth with NVIDIA NVLink™ and NVSwitch™
  • Dual AMD EPYC™ 9005/9004 Series Processors
  • 12-Channel DDR5 RDIMM, 24 x DIMMs
  • Dual ROM Architecture
  • Compatible with NVIDIA® BlueField®-3 DPUs and NVIDIA ConnectX®-7 NICs
  • 2 x 10Gb/s LAN ports via Intel® X710-AT2
  • 2 x M.2 slots with PCIe Gen3 x4 and x1 interface
  • 8 x 2.5" Gen5 NVMe hot-swap bays
  • 4 x FHHL dual-slot PCIe Gen5 x16 slots
  • 8 x FHHL single-slot PCIe Gen5 x16 slots
  • 6+6 3000W 80 PLUS Titanium redundant power supplies
     
Get this product for $495,000.00
Get it in 10 days
Will be delivered to your location via DHL or UPS. Ask an agent if import tariffs apply.
Inquiry to Buy


Supports NVIDIA HGX™ B200 8-GPU

The NVIDIA HGX™ B200 propels the data center into a new era of accelerated computing and generative AI, integrating NVIDIA Blackwell Tensor Core GPUs with a high-speed interconnect to accelerate AI performance at scale. The eight-GPU configuration delivers unparalleled generative AI acceleration alongside a remarkable 1.4 terabytes (TB) of GPU memory and 64 terabytes per second (TB/s) of memory bandwidth, enabling 15X faster real-time trillion-parameter-model inference at 12X lower cost and 12X less energy. This combination positions HGX B200 as a premier accelerated x86 scale-up platform for the most demanding generative AI, data analytics, and high-performance computing (HPC) workloads.

HGX B200 supports advanced networking options at speeds up to 400 gigabits per second (Gb/s), delivering the highest AI performance with NVIDIA Quantum-2 InfiniBand and the NVIDIA Spectrum™-X Ethernet platform. Paired with NVIDIA® BlueField®-3 data processing units (DPUs), HGX B200 enables cloud networking, composable storage, zero-trust security, and GPU compute elasticity in hyperscale AI clouds.
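The eight Blackwell GPUs on the HGX B200 baseboard are fully connected over NVLink through NVSwitch, so each GPU can reach every other GPU directly. The sketch below is one minimal way to confirm that topology from software once the server is provisioned, assuming a CUDA-enabled PyTorch installation; the check_gpu_mesh helper is illustrative only and not part of GIGABYTE or NVIDIA tooling, and the reported names and memory sizes depend on the installed configuration.

    # gpu_mesh_check.py - minimal sketch, assuming PyTorch with CUDA is installed.
    # Enumerates the visible GPUs and checks peer-to-peer (NVLink/NVSwitch) access
    # between every pair; on an 8-GPU HGX board each GPU should report 7 peers.
    import torch

    def check_gpu_mesh() -> None:
        n = torch.cuda.device_count()  # expected to be 8 on this system
        print(f"Visible GPUs: {n}")
        for i in range(n):
            props = torch.cuda.get_device_properties(i)
            print(f"  GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
        for i in range(n):
            peers = [j for j in range(n)
                     if j != i and torch.cuda.can_device_access_peer(i, j)]
            print(f"  GPU {i} direct peers: {peers}")

    if __name__ == "__main__":
        check_gpu_mesh()

If any GPU reports fewer than seven peers, that usually points to a driver or fabric-manager issue rather than the board itself, and is worth checking before running large multi-GPU jobs.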

