NVIDIA H100 80GB

NVIDIA · Data Center

$22,000–$30,000 MSRP

The NVIDIA H100 is the GPU that trained most of the world's leading AI models. With 80GB of HBM3 at 3.35 TB/s bandwidth, it's in a completely different class from consumer hardware. FP8 Transformer Engine provides massive inference speedups. Used H100s are becoming available as companies refresh to Blackwell, but still cost $20k+. Requires server infrastructure — no display output, no consumer drivers.

Best For: The fastest AI GPU in existence — for those with the budget and infrastructure
Verdict: If you're asking whether you need an H100, you probably don't. But nothing else comes close.
AI: 10/10
Gaming: 1/10

Specifications

VRAM: 80GB HBM3
Memory Bandwidth: 3,350 GB/s
CUDA Cores: 14,592
Boost Clock: 1,845 MHz
TDP: 350W
Power Connector: 1x 8-pin
Length: 267mm
Form Factor: Dual Slot
Release Year: 2023
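The memory bandwidth figure above sets a hard ceiling on single-stream LLM decode speed, since generating each token streams the full set of weights from VRAM. A minimal sketch of that arithmetic (the model size and bytes-per-parameter values are illustrative assumptions, not benchmarks):

```python
# Rough upper bound on single-stream decode speed for a memory-bound LLM:
# each generated token reads all model weights from VRAM once, so
# tokens/sec <= memory bandwidth / model size in bytes.

def max_decode_tps(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    """Ceiling on tokens/sec from memory bandwidth alone."""
    model_size_gb = params_b * bytes_per_param  # params in billions -> size in GB
    return bandwidth_gb_s / model_size_gb

# H100 at the listed 3,350 GB/s running a 70B model at Q4 (~0.5 bytes/param)
print(round(max_decode_tps(3350, 70, 0.5)))  # ~96 tokens/sec ceiling
```

Real throughput lands below this bound once kernel overhead and KV-cache reads are included, but the ratio explains why HBM bandwidth, not compute, dominates interactive inference.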

AI Capabilities

Unrivaled: 80GB VRAM

Run 70B+ models, no compromises. The AI power user's dream.

Can run (Q4 quantized)

Llama 3.1 70B, Llama 3.1 8B, Qwen 2.5 72B, Qwen 2.5 32B, Qwen 2.5 14B, Mistral 7B, DeepSeek R1 70B, FLUX.1 Dev, Stable Diffusion XL, Stable Diffusion 3.5 Large, HunyuanVideo, CogVideoX-5B, Mochi 1, LTX Video, Stable Video Diffusion, Wan Video 14B, Codestral 22B, Qwen 2.5 Coder 32B, LLaVA 1.6 34B, AlphaFold 2, ESMFold (ESM-2 15B), ESM-2 3B, scGPT, RFdiffusion, Fine-tune Llama 8B, Fine-tune Llama 70B, Train SDXL LoRA, Train FLUX LoRA

Recommended system RAM for AI: 160GB+ (2x GPU VRAM for model overflow)
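A quick way to sanity-check the "Can run (Q4 quantized)" list is a back-of-envelope VRAM estimate. The rule of thumb used here is an assumption, not a measurement: a Q4 model needs roughly 0.5 bytes per parameter for weights, plus around 20% overhead for KV cache and activations.

```python
# Back-of-envelope VRAM estimate for Q4-quantized LLMs.
# Assumptions (illustrative): ~4 bits (0.5 bytes) per parameter for weights,
# plus ~20% overhead for KV cache and activations at modest context lengths.

def q4_vram_gb(params_b: float, overhead: float = 0.20) -> float:
    """Approximate VRAM needed (GB) for a Q4 model with params_b billion parameters."""
    weights_gb = params_b * 0.5
    return weights_gb * (1 + overhead)

for name, params in [("Llama 3.1 70B", 70), ("Qwen 2.5 32B", 32), ("Mistral 7B", 7)]:
    need = q4_vram_gb(params)
    print(f"{name}: ~{need:.0f} GB, fits in 80 GB: {need <= 80}")
```

By this estimate a 70B model at Q4 needs roughly 42 GB, which is why the H100's 80GB runs it with headroom to spare, and why the page recommends system RAM at about 2x GPU VRAM (160GB) for model loading and overflow.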

Pros

  • 80GB HBM3 at 3.35 TB/s — fastest memory available
  • FP8 Transformer Engine
  • NVLink for multi-GPU scaling
  • Runs any model unquantized

Cons

  • $22,000+ used
  • Requires server chassis
  • No display output
  • No gaming whatsoever
  • Substantial power and cooling requirements