NVIDIA's L2 PCIe: Elevating Performance in a Compact Form Factor
NVIDIA's L2 PCIe, part of the Ada Lovelace architecture, is a powerful GPU designed for high performance in a compact single-slot, low-profile (1-slot LP) form factor. With 24 GB of GDDR6 memory with ECC and 300 GB/s of memory bandwidth, the L2 PCIe handles data-intensive workloads smoothly. Its Tensor Cores cover INT8, FP8, BF16, FP16, and TF32 operations, and the GPU delivers 24.1 TFLOPS of FP32 processing power. RT Cores provide real-time ray tracing, while Multi-Instance GPU (MIG) support allows efficient resource partitioning. A 36 MB L2 cache and a robust media engine with 2 NVENC (+AV1), 4 NVDEC, and 4 NVJPEG units add to its versatility. With power consumption details yet to be disclosed, the L2 PCIe is a promising option for applications that demand high performance within constrained spaces.
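Most of the headline figures above can be confirmed from software once a card is installed. The minimal sketch below is an illustrative example (not part of NVIDIA's materials) that uses the CUDA runtime's cudaGetDeviceProperties to print the memory capacity, ECC state, and L2 cache size reported by device 0; on an L2 these should line up with the 24 GB and 36 MB figures quoted here.

```cpp
// Minimal sketch: query the device properties that back several of the
// figures in this listing via the CUDA runtime API. Field names are the
// standard cudaDeviceProp members.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    std::printf("Device            : %s\n", prop.name);
    std::printf("Global memory     : %.1f GB\n", prop.totalGlobalMem / 1e9);
    std::printf("ECC enabled       : %s\n", prop.ECCEnabled ? "yes" : "no");
    std::printf("L2 cache          : %.1f MB\n", prop.l2CacheSize / 1e6);
    std::printf("SM count          : %d\n", prop.multiProcessorCount);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}
```

Build with nvcc (for example, `nvcc query_gpu.cu -o query_gpu`, where the file name is just a placeholder) and run it on the host holding the card.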
NVIDIA L2 Enterprise 24GB
✓ GPU Architecture: NVIDIA Ada Lovelace
✓ GPU Memory: 24 GB GDDR6 with ECC
✓ Memory Bandwidth: 300 GB/s
✓ Tensor Cores (see the throughput sketch after this list):
- INT8 | FP8 Tensor Core*: 193 TFLOPS
- BF16 | FP16 Tensor Core*: 96.5 TFLOPS
- TF32 Tensor Core*: 48.3 TFLOPS
✓ FP32 Processing Power: 24.1 TFLOPS
✓ FP64 Processing Power: N/A
✓ RT Core: Yes
✓ MIG Support: Yes
✓ L2 Cache: 36 MB
✓ Media Engine:
- 2 NVENC (+AV1)
- 4 NVDEC
- 4 NVJPEG
✓ Power Consumption: TBD
✓ Form Factor: 1-slot LP
✓ Interconnect: PCIe Gen4 x16 with a speed of 64 GB/s
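To relate the Tensor Core figures above to something measurable, the hypothetical micro-benchmark below (not vendor code) times a large FP16 GEMM through cuBLAS, which dispatches to the Tensor Cores on Ada-generation parts, and reports achieved TFLOPS. The matrix size, iteration count, and FP32-accumulation compute type are illustrative choices, and measured results will land below the quoted peaks; it requires CUDA 11 or later for the cublasComputeType_t argument.

```cpp
// Rough FP16 Tensor Core GEMM throughput sketch using cuBLAS.
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cuda_fp16.h>

int main() {
    const int N = 8192;          // square matrices, size chosen for illustration
    const int iters = 20;
    size_t bytes = size_t(N) * N * sizeof(__half);

    __half *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemset(dA, 0, bytes);    // contents don't matter for a timing run
    cudaMemset(dB, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    auto gemm = [&] {
        // FP16 inputs with FP32 accumulation, the usual Tensor Core path
        cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                     &alpha, dA, CUDA_R_16F, N, dB, CUDA_R_16F, N,
                     &beta,  dC, CUDA_R_16F, N,
                     CUBLAS_COMPUTE_32F, CUBLAS_GEMM_DEFAULT);
    };
    gemm();                      // warm-up
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i) gemm();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * N * N * N * iters;          // 2*N^3 FLOPs per GEMM
    std::printf("Achieved: %.1f TFLOPS\n", flops / (ms * 1e-3) / 1e12);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Compile with something like `nvcc tc_bench.cu -o tc_bench -lcublas` (file name is a placeholder); comparing the achieved number against the listed figures is a quick health check for the card and driver stack.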
COMING SOON
Specification | L2 PCIe |
---|---|
GPU Architecture | NVIDIA Ada Lovelace |
GPU Memory | 24 GB GDDR6 with ECC |
Memory Bandwidth | 300 GB/s |
Tensor Cores | INT8/FP8 Tensor Core*: 193 TFLOPS; BF16/FP16 Tensor Core*: 96.5 TFLOPS; TF32 Tensor Core*: 48.3 TFLOPS |
FP32 Processing Power | 24.1 TFLOPS |
FP64 Processing Power | N/A |
RT Core | Yes |
MIG Support | Yes |
L2 Cache | 36 MB |
Media Engine | 2 NVENC (+AV1), 4 NVDEC, 4 NVJPEG |
Power Consumption | TBD |
Form Factor | 1-slot LP |
Interconnect | PCIe Gen4 x16 with a speed of 64 GB/s |