Giant Memory for Giant Models
Unlike existing AI supercomputers that are designed to support workloads that fit within the memory of a single system, NVIDIA DGX GH200 is the only AI supercomputer that offers a shared memory space of up to 144TB across 256 Grace Hopper Superchips, providing developers with nearly 500X more fast-access memory to build massive models. DGX GH200 is the first supercomputer to pair Grace Hopper Superchips with the NVIDIA NVLink Switch System, which allows up to 256 GPUs to be united as one data-center-size GPU. This architecture provides 48X more bandwidth than the previous generation, delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU.
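The "programming a single GPU" claim maps onto CUDA's unified (managed) memory model. The sketch below is a minimal illustration only, not an official NVIDIA sample: it assumes any CUDA-capable system, uses a hypothetical file name and an illustrative buffer size, and shows how one managed pointer is touched by both CPU and GPU code. On Grace Hopper systems such allocations can be backed by far more memory than a single GPU's HBM; addressing the full 144TB pool across 256 superchips in practice also involves multi-node libraries and is beyond this sketch.

```cpp
// unified_memory_sketch.cu -- minimal sketch, not an official NVIDIA sample.
// Shows the single-GPU programming model that DGX GH200 extends: one managed
// pointer is visible to both CPU and GPU code. File name, buffer size, and
// kernel are illustrative assumptions, not taken from the product page.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, size_t n, float factor) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = size_t(1) << 28;               // ~1 GiB of floats (illustrative)
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // one pointer, shared by CPU and GPU
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU reads/writes the same pointer
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);              // CPU reads the GPU's result
    cudaFree(data);
    return 0;
}
```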
Super Power-Efficient Computing
As the complexity of AI models has increased, the technology to develop and deploy them has become more resource intensive. However, using the NVIDIA Grace Hopper architecture, DGX GH200 achieves excellent power efficiency. Each NVIDIA Grace Hopper Superchip is both a CPU and GPU in one unit, connected with superfast NVIDIA NVLink-C2C. The Grace™ CPU uses LPDDR5X memory, which consumes one-eighth the power of traditional DDR5 system memory while providing 50 percent more bandwidth than eight-channel DDR5. And being on the same package, the Grace CPU and Hopper™ GPU interconnect consumes 5X less power and provides 7X the bandwidth compared to the latest PCIe technology used in other systems.
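One practical effect of NVLink-C2C coherence is that, on a Grace Hopper node, a GPU kernel can dereference an ordinary CPU allocation in place. The sketch below is a rough illustration under that assumption (a system with hardware coherence and address translation between Grace and Hopper); on systems without this support the in-kernel access would fail. Names and sizes are hypothetical.

```cpp
// c2c_coherence_sketch.cu -- minimal sketch, assuming a Grace Hopper node where
// NVLink-C2C hardware coherence lets the GPU dereference plain CPU allocations.
// Not from the product page; sizes and names are illustrative only.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void add_one(int* data, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;                        // GPU updates CPU memory in place
}

int main() {
    const size_t n = 1 << 20;
    int* data = static_cast<int*>(malloc(n * sizeof(int))); // plain CPU allocation
    for (size_t i = 0; i < n; ++i) data[i] = int(i);

    add_one<<<(n + 255) / 256, 256>>>(data, n);     // no cudaMalloc, no cudaMemcpy
    cudaDeviceSynchronize();

    printf("data[42] = %d (expected 43)\n", data[42]);
    free(data);
    return 0;
}
```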
Integrated and Ready to Run
Designing, integrating, and operationalizing a hyperscale data center tuned for massive-memory application development can be complex and time consuming. With DGX GH200, NVIDIA is not just a technology provider but a trusted partner that helps ensure success. As a fully tested and integrated solution spanning software, compute, and networking, backed by white-glove services that range from installation and infrastructure management to expert advice on optimizing workloads, DGX GH200 lets teams hit the ground running.
NVIDIA DGX GH200 Deep Learning Console
✓ 256x NVIDIA Grace Hopper Superchips
✓ 18,432 Arm® Neoverse V2 Cores
✓ 144TB GPU Memory
✓ 1 exaFLOPS Performance
✓ Comprehensive Networking Suite
✓ NVIDIA NVLink Switch System
✓ Host baseboard management controller
✓ Includes NVIDIA AI Enterprise, NVIDIA Base Command, and various OS options
✓ Three-year standard support
✓ Super power-efficient computing
✓ Fully integrated and ready-to-run solution
Lead times are in excess of 6 months. Pricing available upon request. For bulk inquiries, consult a live chat agent or call our toll-free number.
Specification | Details |
---|---|
CPU and GPU | 256x NVIDIA Grace Hopper Superchips |
CPU Cores | 18,432 Arm® Neoverse V2 Cores with SVE2 4X 128b |
GPU Memory | 144TB |
Performance | 1 exaFLOPS |
Networking | 256x OSFP single-port NVIDIA ConnectX®-7 VPI with 400Gb/s InfiniBand, 256x dual-port NVIDIA BlueField®-3 VPI with 200Gb/s InfiniBand and Ethernet, 24x NVIDIA Quantum-2 QM9700 InfiniBand Switches, 20x NVIDIA Spectrum™ SN2201 Ethernet Switches, 22x NVIDIA Spectrum SN3700 Ethernet Switches |
NVIDIA NVLink Switch System | 96x L1 NVIDIA NVLink Switches, 36x L2 NVIDIA NVLink Switches |
Management Network | Host baseboard management controller (BMC) with RJ45 |
Software | NVIDIA AI Enterprise (optimized AI software), NVIDIA Base Command (orchestration, scheduling, and cluster management), DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky (operating system) |
Support | Three-year business-standard hardware and software support |
Power Efficiency | NVIDIA Grace Hopper architecture: each superchip combines a CPU and GPU connected via NVLink-C2C, and the Grace™ CPU's LPDDR5X memory consumes one-eighth the power of traditional DDR5 while providing 50 percent more bandwidth than eight-channel DDR5 |
Integration | Fully tested and integrated solution spanning software, compute, and networking, backed by white-glove services that range from installation and infrastructure management to expert advice on optimizing workloads |