AI Hardware, NVIDIA Solution
In stock

NVIDIA DGX H100 Deep Learning Console 640GB SXM5


✓ Equipped with 8x NVIDIA H100 Tensor Core GPUs SXM5

✓ GPU memory totals 640GB

✓ Achieves 32 petaFLOPS FP8 performance

✓ Incorporates 4x NVIDIA® NVSwitch™

✓ System power usage peaks at ~10.2kW

✓ Employs Dual 56-core 4th Gen Intel® Xeon® Scalable processors

✓ Provides 2TB of system memory

✓ Offers robust networking, including 4x OSFP ports, NVIDIA ConnectX-7 VPI, and options for 400 Gb/s InfiniBand or 200 Gb/s Ethernet

✓ Features 10 Gb/s onboard NIC with RJ45 for management network, with options for a 50 Gb/s Ethernet NIC

✓ Storage includes 2x 1.9TB NVMe M.2 for OS and 8x 3.84TB NVMe U.2 for internal storage

✓ Comes pre-loaded with NVIDIA AI Enterprise software suite, NVIDIA Base Command, and choice of Ubuntu, Red Hat Enterprise Linux, or CentOS operating systems

✓ Operates within a temperature range of 5–30°C (41–86°F)

✓ 3-year manufacturer parts or replacement warranty included (return-to-base only)

     
Get this product for $375,000.00
Get it in 10 days
Will be delivered to your location via DHL
Inquiry to Buy

In stock; however, lead times can still run around 6–8 weeks due to EUS certification filing required under export regulations. Availability, pricing, and allocations may fluctuate daily. All sales are final; no returns or cancellations. For bulk inquiries, consult a live-chat agent or call our toll-free number.


NVIDIA DGX H100 - The Gold Standard for AI Infrastructure

NVIDIA DGX H100 powers business innovation and optimization. The latest iteration of NVIDIA’s legendary DGX systems and the foundation of NVIDIA DGX SuperPOD™, DGX H100 is an AI powerhouse that features the groundbreaking NVIDIA H100 Tensor Core GPU. The system is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.

GPU: 8x NVIDIA H100 Tensor Core GPUs SXM5
GPU memory: 640GB total
Performance: 32 petaFLOPS FP8
NVIDIA® NVSwitch™: 4x
System power usage: ~10.2kW max
CPU: Dual 56-core 4th Gen Intel® Xeon® Scalable processors
System memory: 2TB
Networking: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI (400 Gb/s InfiniBand or 200 Gb/s Ethernet); 2x dual-port NVIDIA ConnectX-7 VPI (1x 400 Gb/s InfiniBand, 1x 200 Gb/s Ethernet)
Management network: 10 Gb/s onboard NIC with RJ45; optional 50 Gb/s Ethernet NIC; host baseboard management controller (BMC) with RJ45
Storage: OS: 2x 1.9TB NVMe M.2; internal storage: 8x 3.84TB NVMe U.2
Software: NVIDIA AI Enterprise (optimized AI software); NVIDIA Base Command (orchestration, scheduling, and cluster management); Ubuntu / Red Hat Enterprise Linux / CentOS (operating system)
Support: 3-year business-standard hardware and software support
Operating temperature range: 5–30°C (41–86°F)
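As a quick post-deployment check that all eight GPUs and the advertised 640GB of total GPU memory are visible to the system, a minimal Python sketch such as the one below can be run. It assumes PyTorch with CUDA support is available in the environment (for example, inside an NVIDIA AI Enterprise container); it is an illustrative check, not an official NVIDIA tool.

    # Minimal sketch: list the GPUs on a DGX H100 and sum their memory.
    # Assumes PyTorch with CUDA support is installed; not an official NVIDIA utility.
    import torch

    def summarize_gpus():
        count = torch.cuda.device_count()  # expect 8 on a DGX H100
        total_bytes = 0
        for i in range(count):
            props = torch.cuda.get_device_properties(i)
            total_bytes += props.total_memory
            print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
        # 8 x ~80 GB per H100 SXM5 corresponds to the advertised 640GB total
        print(f"Total: {total_bytes / 1e9:.0f} GB across {count} GPUs")

    if __name__ == "__main__":
        summarize_gpus()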