About

Overview

The NVIDIA DGX B300 is not just a server; it is a fully integrated AI factory designed to solve the world’s most complex computing challenges. Built on the revolutionary NVIDIA Blackwell Ultra architecture, this system sets a new standard for AI infrastructure, combining unprecedented compute density, massive memory capacity, and lightning-fast networking into a single, production-ready platform. Whether you are training trillion-parameter models or deploying real-time reasoning engines, the DGX B300 provides the horsepower needed to turn raw data into intelligence.

Key Features & Performance

  • Unmatched Compute Power: Equipped with 8x NVIDIA Blackwell Ultra GPUs, the DGX B300 delivers up to 144 petaFLOPS of FP4 AI inference performance and 72 petaFLOPS of FP8 training performance. This represents a generational leap, offering 11x faster inference and 4x faster training compared to the Hopper generation, drastically reducing time-to-solution for the most demanding AI workloads.

  • Massive Memory Landscape: With a total of 2.3 TB of high-bandwidth GPU memory (288 GB per GPU), the system is engineered to handle massive datasets and the largest Mixture-of-Experts (MoE) models without memory bottlenecks. This expanded capacity allows developers to fit larger models into memory, speeding up processing and enabling more complex reasoning tasks.
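
As a back-of-envelope check of that capacity, the sketch below estimates whether a model's weights fit in the system's aggregate GPU memory. The `bytes_per_param` value (0.5 for FP4) is standard arithmetic, but the `overhead` multiplier for KV cache and activations is an illustrative assumption, not a measured figure.

```python
# Rough fit check against the DGX B300's aggregate GPU memory.
TOTAL_GPU_MEM_GB = 8 * 288  # 8 GPUs x 288 GB = 2,304 GB (~2.3 TB)

def model_footprint_gb(params_billion, bytes_per_param=0.5, overhead=1.2):
    """Weights-only footprint (FP4 = 0.5 bytes/param), inflated by a
    loose multiplier for KV cache and activations (an assumption)."""
    # 1 billion params x 1 byte = 1 GB, so GB = billions x bytes/param.
    return params_billion * bytes_per_param * overhead

# A hypothetical 1-trillion-parameter model quantized to FP4:
fits = model_footprint_gb(1000) <= TOTAL_GPU_MEM_GB  # 600 GB -> fits
```

By this estimate, even a trillion-parameter model in FP4 uses roughly a quarter of the available 2.3 TB, leaving headroom for batch size and longer contexts.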

  • Unified High-Speed Fabric: The system features 8x NVIDIA ConnectX-8 VPI network adapters, delivering up to 800 Gb/s of throughput per port. This advanced networking architecture ensures seamless scaling across clusters, making it the ideal building block for NVIDIA DGX SuperPOD™ and BasePOD™ deployments.
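
For intuition on what those per-port figures add up to, a quick arithmetic sketch (raw line rates only; achievable goodput will be lower once protocol overhead is accounted for):

```python
# Aggregate compute-fabric line rate from the per-port figures.
PORTS = 8            # one ConnectX-8 OSFP port per GPU
PER_PORT_GBPS = 800  # Gb/s per port (InfiniBand or Ethernet)

aggregate_tbps = PORTS * PER_PORT_GBPS / 1_000  # 6.4 Tb/s total
aggregate_gb_per_s = PORTS * PER_PORT_GBPS / 8  # 800 GB/s raw line rate
```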

  • Turnkey Enterprise AI: The DGX B300 comes fully integrated with the NVIDIA AI Enterprise software suite, providing a pre-optimized stack for generative AI, data science, and computer vision. From the operating system to the drivers and frameworks, every component is tuned for maximum performance and reliability, allowing your team to focus on innovation rather than infrastructure management.

Power & Cooling Infrastructure

  • High-Density Power Architecture: The system is engineered for maximum performance with a robust power subsystem capable of handling a peak consumption of roughly 14.5–15.1 kW per node, depending on power configuration. It supports flexible power configurations, including N+N redundancy with up to 12 power supply units (PSUs) or direct connection to rack busbars for streamlined data center integration.
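
To make the redundancy claim concrete, here is an illustrative calculation using the spec table's 15.1 kW PSU-configuration figure and assuming a symmetric N+N split with even load sharing (the actual load-balancing behavior is an assumption, not a documented detail):

```python
# Per-PSU load under N+N redundancy, an illustrative sketch.
PEAK_KW = 15.1   # PSU-configuration peak from the spec table
PSUS = 12
N = PSUS // 2    # N+N: either half of the 12 PSUs can carry full load

per_psu_normal_kw = PEAK_KW / PSUS  # ~1.26 kW each with all healthy
per_psu_failover_kw = PEAK_KW / N   # ~2.52 kW each if one side drops
```

Each supply therefore runs well below half capacity in normal operation, which is what lets the node ride through the loss of an entire feed.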

  • Thermal Management Options: To accommodate diverse data center environments, the platform is available in both air-cooled (8U) and liquid-cooled (4U) configurations. The liquid-cooled variant leverages direct-to-chip technology to capture significant heat loads, enabling higher rack densities and improved energy efficiency for large-scale deployments.

Software & Ecosystem

  • NVIDIA Base Command & Mission Control: Integrated management software simplifies the orchestration of AI workloads, providing a unified view of cluster health, job scheduling, and resource utilization. NVIDIA Mission Control enhances operational agility by automating essential tasks and ensuring infrastructure resiliency for AI factories at scale.

  • Full-Stack Optimization: The system ships with the NVIDIA DGX OS, which includes a tuned Linux kernel, optimized drivers, and the NVIDIA Container Toolkit. This pre-installed stack ensures out-of-the-box readiness for rapid deployment of containerized AI applications and seamless integration with MLOps workflows.

Characteristics
GPU: 8x NVIDIA Blackwell Ultra SXM
Total GPU Memory: 2.3 TB total (288 GB per GPU), 62 TB/s HBM3e bandwidth
Performance: FP4 Tensor Core* - 144 PFLOPS | 108 PFLOPS; FP8/FP6 Tensor Core** - 72 PFLOPS
System Memory: 2 TB, configurable to 4 TB
NVIDIA NVLink™ Switch System: 2x
NVIDIA NVLink Bandwidth: 14.4 TB/s aggregate
System Power: 14.5 kW (busbar); 15.1 kW (PSU)
CPU: Intel Xeon 6776P processors
Networking: 8x OSFP ports serving 8x NVIDIA ConnectX-8 VPI, up to 800 Gb/s NVIDIA InfiniBand/Ethernet; 2x dual-port QSFP112 NVIDIA BlueField®-3 DPU, up to 400 Gb/s NVIDIA InfiniBand/Ethernet
Management Network: 1 GbE onboard network interface card (NIC) with RJ45; 1 GbE RJ45 host baseboard management controller (BMC)
Storage: OS: 2x 1.9 TB NVMe M.2; internal: 8x 3.84 TB NVMe E1.S
Software: NVIDIA AI Enterprise (optimized AI software); NVIDIA Mission Control (AI data center operations and orchestration with NVIDIA Run:ai technology); NVIDIA DGX OS (operating system); supports Red Hat Enterprise Linux, Rocky Linux, and Ubuntu
Rack Units: 10
Operating Temperature: 10°C–35°C
Support: Three-year business-standard hardware and software support
Length: 35.3"
Width: 19.0"
Height: 17.5"

NVIDIA DGX B300 Server - 8x Blackwell Ultra GPU 288 GB - 3 yr Business Standard Support - Commercial

Delivering a massive 2.3 TB of GPU memory and 144 petaFLOPS of inference performance, this turnkey system is purpose-built to accelerate massive LLM training, complex reasoning, and data-intensive workloads. Designed for the modern data center, the DGX B300 offers 11x faster inference and 4x faster training than previous generations, enabling enterprises to refine, deploy, and scale AI with hyperscale efficiency.

Condition: NEW

Shop with confidence! SMTTR is an authorized NVIDIA Partner Reseller

Warranty & Support
All products are covered under a 3-YEAR EXTENDED WARRANTY from the manufacturer & SMTTR.
Shipping & Fulfillment
All products ship from verified U.S. distributors or the SMTTR testing facility.
Returns & Refunds
Returns accepted within 30 days.
