NVIDIA · Data Center
NVIDIA A100 80GB
$8,000–$15,000 MSRP
The NVIDIA A100 80GB is the data center GPU that powered the AI revolution. With 80GB of HBM2e memory at over 2 TB/s of bandwidth, it runs 30B-class models at full FP16 precision and 70B models at Q8 on a single card. Originally $15,000+, used A100s are now available for around $8,000. They require a server chassis or a PCIe adapter and have no display output. For AI builders with the budget and technical skill, a used A100 offers unmatched VRAM capacity.
Best For: Running the largest AI models with zero compromises on quality
Verdict: The ultimate AI GPU if you can handle the server-grade form factor.
AI: 10/10
Gaming: 1/10
Specifications
VRAM: 80GB HBM2e
Memory Bandwidth: 2,039 GB/s
CUDA Cores: 6,912
Boost Clock: 1,410 MHz
TDP: 300W
Power Connector: 1x 8-pin
Length: 267mm
Form Factor: Dual Slot
Release Year: 2021
AI Capabilities
Unrivaled: 80GB VRAM
Run 70B+ models, no compromises. The AI power user's dream.
Can run (Q4 quantized)
Llama 3.1 70B, Llama 3.1 8B, Qwen 2.5 72B, Qwen 2.5 32B, Qwen 2.5 14B, Mistral 7B, DeepSeek R1 70B, FLUX.1 Dev, Stable Diffusion XL, Stable Diffusion 3.5 Large, HunyuanVideo, CogVideoX-5B, Mochi 1, LTX Video, Stable Video Diffusion, Wan Video 14B, Codestral 22B, Qwen 2.5 Coder 32B, LLaVA 1.6 34B, AlphaFold 2, ESMFold (ESM-2 15B), ESM-2 3B, scGPT, RFdiffusion, Fine-tune Llama 8B, Fine-tune Llama 70B, Train SDXL LoRA, Train FLUX LoRA
Recommended system RAM for AI: 160GB+ (2x GPU VRAM for model overflow)
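Whether a given model fits in the A100's 80GB comes down to parameter count times bytes per parameter, plus headroom for the KV cache and activations. A minimal sketch of that check, assuming weights dominate memory use and a guessed ~10% overhead factor (not a vendor figure):

```python
# Rough VRAM-fit check for a single A100 80GB.
# Assumptions: weights dominate memory use; the 1.1 overhead factor
# for KV cache and activations is a guess, not a published figure.

VRAM_GB = 80
BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.5}

def fits(params_billions: float, quant: str, overhead: float = 1.1) -> bool:
    """True if the model's weights (plus overhead) fit in VRAM."""
    needed_gb = params_billions * BYTES_PER_PARAM[quant] * overhead
    return needed_gb <= VRAM_GB

print(fits(70, "Q8"))    # True:  ~77 GB fits in 80 GB
print(fits(32, "FP16"))  # True:  ~70 GB
print(fits(70, "FP16"))  # False: ~154 GB needs multiple GPUs
```

This is also where the 2x-VRAM system RAM guideline comes from: enough host memory to stage the full model while loading or to hold layers that overflow the card.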
Performance Estimates
Estimated tokens/sec for LLM inference, derived from the card's 2,039 GB/s memory bandwidth; these are estimates, not hardware benchmarks.
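The bandwidth-based method can be sketched as: decoding is memory-bound, so each generated token streams the full weight set from memory roughly once, and tok/s is approximately bandwidth divided by model size in bytes, scaled by an efficiency factor. The 0.7 default efficiency below is an assumption chosen to reflect typical real-world bus utilization, not a measured value:

```python
# Bandwidth-bound estimate of LLM decode speed on an A100 80GB.
# Assumptions: each token reads all weights once; real kernels reach
# only a fraction of peak bandwidth (0.7 efficiency is a guess).

BANDWIDTH_GBPS = 2039
BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.5}

def tok_per_sec(params_billions: float, quant: str,
                efficiency: float = 0.7) -> float:
    model_gb = params_billions * BYTES_PER_PARAM[quant]
    return BANDWIDTH_GBPS / model_gb * efficiency

print(round(tok_per_sec(70, "Q8")))   # ~20, within the table's ~18-23
print(round(tok_per_sec(8, "FP16")))  # ~89, within the table's ~76-94
```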
Llama 3.1 70B · Q8: ~18-23 tok/s (Usable)
Llama 3.1 8B · FP16: ~76-94 tok/s (Excellent)
Qwen 2.5 72B · Q8: ~18-22 tok/s (Usable)
Qwen 2.5 32B · FP16: ~19-23 tok/s (Usable)
Qwen 2.5 14B · FP16: ~43-54 tok/s (Fast)
Mistral 7B · FP16: ~87-107 tok/s (Excellent)
DeepSeek R1 70B · Q8: ~18-23 tok/s (Usable)
Codestral 22B · FP16: ~28-34 tok/s (Usable)
Qwen 2.5 Coder 32B · FP16: ~19-23 tok/s (Usable)
Pros
- 80GB HBM2e: 70B-class models fit on a single card
- Over 2 TB/s of memory bandwidth
- NVLink for multi-GPU scaling
- Available used for ~$8,000
Cons
- No display output on most models
- Requires a server chassis or PCIe adapter
- No gaming drivers
- High used-market prices
Will It Run?
Llama 3.1 70B: Q8
Llama 3.1 8B: FP16
Qwen 2.5 72B: Q8
Qwen 2.5 32B: FP16
Qwen 2.5 14B: FP16
Mistral 7B: FP16
DeepSeek R1 70B: Q8
FLUX.1 Dev: FP16