About

Overview

The NVIDIA® A100 80GB PCIe card delivers unprecedented acceleration to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA A100 80GB PCIe supports double precision (FP64), single precision (FP32), half precision (FP16), and integer (INT8) compute tasks.

The NVIDIA A100 80GB card is a dual-slot, 10.5-inch PCI Express Gen4 card based on the NVIDIA Ampere GA100 graphics processing unit (GPU). It uses a passive heat sink for cooling and therefore requires system airflow to operate within its thermal limits. The NVIDIA A100 80GB PCIe operates unconstrained up to its maximum thermal design power (TDP) of 300 W to accelerate applications that demand the fastest computational speed and highest data throughput. This generation doubles GPU memory over the 40GB model and delivers the highest memory bandwidth of any PCIe card, up to 1.94 terabytes per second (TB/s), speeding time to solution for the largest models and most massive data sets.
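As a rough sanity check, the 1.94 TB/s figure follows from the commonly reported HBM2e configuration for this card: a 5120-bit memory bus (five active 1024-bit stacks) at an effective 3.024 Gbps per pin. These bus-width and pin-rate values are assumptions drawn from public reporting, not from this page.

```python
# Reconstruct the quoted ~1,935 GB/s from assumed HBM2e parameters:
# bandwidth (bytes/s) = bus width (bits) * per-pin data rate / 8.
bus_width_bits = 5120    # five 1024-bit HBM2e stacks (assumed)
data_rate_gbps = 3.024   # effective Gbit/s per pin (assumed)

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # 1935 GB/s
```

This matches the 1,935 GB/s entry in the characteristics table below.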

The NVIDIA A100 80GB PCIe card features Multi-Instance GPU (MIG) capability, allowing the GPU to be partitioned into as many as seven isolated GPU instances and providing a unified platform that lets elastic data centers adjust dynamically to shifting workload demands. With MIG partitioning the A100 into up to seven smaller instances, the card can handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100 80GB versatility means IT managers can maximize the utility of every GPU in their data center.
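To make the partitioning concrete, the sketch below lists the standard MIG profiles NVIDIA documents for the A100 80GB as (compute slices, memory in GB) and checks whether a mix of instances fits in the GPU's seven compute slices. The helper function is purely illustrative, not an NVIDIA API.

```python
# Standard A100 80GB MIG profiles: (compute slices, memory in GB).
MIG_PROFILES = {
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

def fits(profile_names):
    """Check compute-slice budget only: an A100 exposes 7 slices."""
    slices = sum(MIG_PROFILES[p][0] for p in profile_names)
    return slices <= 7

# Seven of the smallest instances fill the GPU exactly.
print(fits(["1g.10gb"] * 7))          # True
# A 3-slice and a 4-slice instance also use all 7 slices.
print(fits(["3g.40gb", "4g.40gb"]))   # True
```

Actual placement also depends on memory and position constraints, so treat this as a first-order model only.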

NVIDIA A100 80GB PCIe cards can be linked in pairs with three NVIDIA® NVLink® bridges, delivering 600 GB/s of GPU-to-GPU bandwidth, roughly 10x that of PCIe Gen4, to maximize application throughput on larger workloads.

Characteristics
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
GPU Memory: 80 GB HBM2e
GPU Memory Bandwidth: 1,935 GB/s
Max Thermal Design Power (TDP): 300 W
Multi-Instance GPU: Up to 7 MIG instances @ 10 GB each
Form Factor: PCIe, dual-slot air-cooled or single-slot liquid-cooled
Interconnect: NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
Server Options: Partner and NVIDIA-Certified Systems™ with 1–8 GPUs

* With sparsity

NVIDIA A100 80GB PCIe GPU

Sale price: $17,950.00 (regular price $19,944.44)
Shipping calculated at checkout.
In Stock Now
Shipping & Fulfillment
All products ship from verified U.S. distributors or SMTTR testing facility
Returns & Refunds
Returns accepted within 30 days
Warranty & Support
All products covered under manufacturer warranty (NVIDIA, AMD, etc.); SMTTR provides first-layer support and will coordinate RMA if needed.

