BUILDING WORLD-CLASS SUPERCOMPUTING HAS NEVER BEEN EASIER
Data science teams are at the leading edge of AI innovation, developing projects that can transform enterprises and our world. But they're often left searching for spare compute cycles that can help train the most complex models. These teams need a dedicated AI platform that can plug in anywhere and is fully optimized across hardware and software to deliver groundbreaking performance for multiple, simultaneous users anywhere in the world.
- AI workgroup server delivering 2.5 petaFLOPS of performance that your team can use without limits for training, inference, and data analytics
- Server-grade, plug-and-go, and doesn't require data center power and cooling
- World-class AI platform, with no complicated installation or IT help needed
- The world's only workstation-style system with four fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory
- Delivers a fast track to AI transformation with NVIDIA know-how and experience
Supercomputing for Data Science Teams
Effortlessly providing multiple, simultaneous users with a centralized AI resource, DGX Station A100 is the workgroup appliance for the age of AI. It's capable of running training, inference, and analytics workloads in parallel, and with MIG, it can provide up to 28 separate GPU devices to individual users and jobs so that activity is contained and doesn't impact performance across the system. DGX Station A100 features the same fully optimized NVIDIA DGX™ software stack as all DGX systems, delivering maximum performance and complete interoperability with DGX-based infrastructure, from individual systems to NVIDIA DGX POD™ and NVIDIA DGX SuperPOD™, making DGX Station A100 an ideal platform for teams from all organizations, large and small.
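The "up to 28 separate GPU devices" figure follows directly from the hardware: Multi-Instance GPU (MIG) can split each A100 into as many as seven isolated GPU instances, and DGX Station A100 has four A100s. A minimal arithmetic sketch (illustrative constants, not NVIDIA tooling):

```python
# Sketch: where DGX Station A100's "up to 28 GPU devices" figure comes from.
# MIG partitions each A100 into up to 7 isolated GPU instances (smallest profile);
# constants below restate the datasheet's configuration, not queried hardware.
NUM_GPUS = 4                 # four NVIDIA A100 Tensor Core GPUs in DGX Station A100
MAX_MIG_INSTANCES_PER_GPU = 7  # MIG maximum per A100

max_devices = NUM_GPUS * MAX_MIG_INSTANCES_PER_GPU
print(max_devices)  # 28 isolated devices for parallel users and jobs
```

Because each MIG instance has its own dedicated compute, memory, and cache slice, one user's job cannot degrade another's, which is what lets training, inference, and analytics run side by side on the same box.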
Data Center Performance Without the Data Center
NVIDIA DGX Station A100 provides a data center-class AI server in a workstation form factor, suitable for use in a standard office environment without specialized power and cooling. Its design includes four ultra-powerful NVIDIA A100 Tensor Core GPUs, a top-of-the-line, server-grade CPU, super-fast NVMe storage, and leading-edge PCIe Gen4 buses. DGX Station A100 also includes the same Baseboard Management Controller (BMC) as NVIDIA DGX A100, allowing system administrators to perform any required tasks over a remote connection. DGX Station A100 is the most powerful AI system for an office environment, providing data center technology without the data center.
Bigger Models, Faster Answers
NVIDIA DGX Station A100 isn't a workstation. It's an AI workgroup server that can sit under your desk. In addition to its 64-core, data center-grade CPU, it features the same NVIDIA A100 Tensor Core GPUs as the NVIDIA DGX A100 server, with either 40 or 80 GB of GPU memory each, connected via high-speed SXM4. NVIDIA DGX Station A100 is the only office-friendly system that has four fully interconnected GPUs, leveraging NVIDIA® NVLink®, and that supports MIG, delivering up to 28 separate GPU devices for parallel jobs and multiple users—without impacting system performance.
NVIDIA DGX 4x A100 80GB AI Workstation
AI projects at the forefront of innovation, shattering world records, are built on NVIDIA DGX™ systems. Leading organizations across industries use DGX to power their AI initiatives and change the world.
Model: DGX Station A100
Product Type: Graphics Computing System
Processor & Chipset
- Number of Processors Installed: 1 (64-core, server-grade CPU)
- Number of GPUs Installed: 4x NVIDIA A100
- GPU Cores (per A100): 432 Tensor Cores, 6,912 CUDA cores
- Total GPU Memory: 320 GB (4x NVIDIA A100 80 GB)
Display & Graphics
- Graphics Controller Manufacturer: NVIDIA
- Graphics Controller Model: A100 Tensor Core GPU
- Display Connectors: 4x Mini DisplayPort
Storage
- 1x 7.68 TB U.2 NVMe solid state drive
- 1x 1.92 TB NVMe solid state drive
Network & Communication
- Network Devices: 1x 1 GbE BMC (remote management)
Software
- Operating System: Ubuntu Desktop Linux
Power Description
- Input Voltage: 100-120 V AC
- System Power Consumption