About

Overview

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. Based on the NVIDIA Hopper™ architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s)—that’s nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth. The H200’s larger and faster memory accelerates generative AI and large language models, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.

Characteristics
Technical Specifications:
FP64: 34 TFLOPS
FP64 Tensor Core: 67 TFLOPS
FP32: 67 TFLOPS
TF32 Tensor Core: 989 TFLOPS
BFLOAT16 Tensor Core: 1,979 TFLOPS
FP16 Tensor Core: 1,979 TFLOPS
FP8 Tensor Core: 3,958 TFLOPS
INT8 Tensor Core: 3,958 TOPS
GPU Memory: 141 GB
GPU Memory Bandwidth: 4.8 TB/s
Decoders: 7 NVDEC, 7 JPEG
Confidential Computing: Supported
Max Thermal Design Power (TDP): Up to 700 W (configurable)
Multi-Instance GPU: Up to 7 MIGs @ 18 GB each
Form Factor: SXM
Interconnect: NVIDIA NVLink 900 GB/s, PCIe Gen5 128 GB/s
Server Options: NVIDIA HGX H200 partner and NVIDIA-Certified Systems with 4 or 8 GPUs
NVIDIA AI Enterprise: Add-on
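As a rough illustration of what the 141 GB capacity in the table above means for large-model inference, the sketch below estimates the maximum weight-only parameter count at common precisions. This is a back-of-envelope calculation, not a sizing guarantee: it ignores activations, KV cache, and runtime overhead, so practical limits are lower.

```python
# Back-of-envelope: how many model parameters fit in the H200's 141 GB
# of HBM3e at common inference precisions. Weights only; activations,
# KV cache, and framework overhead are ignored, so real limits are lower.
GPU_MEMORY_GB = 141  # H200 SXM capacity from the spec table above

BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "FP8/INT8": 1}

def max_params_billion(memory_gb: float, bytes_per_param: int) -> float:
    """Rough upper bound on parameter count (in billions) for weights alone."""
    return memory_gb / bytes_per_param  # 1 GB of memory per 1B one-byte params

for precision, nbytes in BYTES_PER_PARAM.items():
    print(f"{precision}: ~{max_params_billion(GPU_MEMORY_GB, nbytes):.1f}B params")
```

By this estimate, FP8 weights for a model of roughly 141B parameters would fit, versus about 70B at FP16/BF16, which is why the doubled capacity over the H100 matters for serving large language models on a single GPU.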

NVIDIA H200 SXM Tensor Core GPU

Regular price: $33,105.26
Sale price: $31,450.00
Shipping calculated at checkout.
Configuration: In Stock Now
Upgrade your tech collection with the latest must-have item, available now in limited quantities.
Shipping & Fulfillment
All products ship from verified U.S. distributors or SMTTR testing facility
Returns & Refunds
Returns accepted within 30 days
Warranty & Support
All products covered under manufacturer warranty (NVIDIA, AMD, etc.); SMTTR provides first-layer support and will coordinate RMA if needed.

