
NVIDIA · RTX 30
NVIDIA GeForce RTX 3080 10GB
$450 (used) · $699 MSRP
The NVIDIA GeForce RTX 3080 10GB was a strong gaming card in its generation with 10GB of GDDR6X. In 2026, the 10GB VRAM is limiting for both modern games and AI. For AI, you can run 7B models but anything larger requires heavy quantization. Good used prices, but the 12GB RTX 3060 or used 3090 are usually better AI value picks.
Best For: Used-market 1440p gaming on a budget
Verdict: 10GB is tight in 2026; look at the 3060 12GB or 3090 24GB for AI.
AI: 4/10
Gaming: 7/10
Specifications
VRAM: 10GB GDDR6X
Memory Bandwidth: 760 GB/s
CUDA Cores: 8,704
Boost Clock: 1710 MHz
TDP: 320W
Power Connector: 2x 8-pin
Length: 285mm
Form Factor: Dual Slot
Release Year: 2020
AI Capabilities
Entry Level · 10GB VRAM
Limited to small models with heavy quantization. Fine for experimenting.
Can run (Q4 quantized)
Llama 3.1 8B, Qwen 2.5 14B, Mistral 7B, FLUX.1 Dev, Stable Diffusion XL, Stable Diffusion 3.5 Large, CogVideoX-5B, Mochi 1, LTX Video, Stable Video Diffusion, AlphaFold 2, ESMFold (ESM-2 15B), ESM-2 3B, scGPT, RFdiffusion, Fine-tune Llama 8B, Train SDXL LoRA
Tight fit (may need CPU offload)
HunyuanVideo (14GB Q4), Wan Video 14B (11GB Q4), Codestral 22B (13GB Q4)
Recommended system RAM for AI: 20GB+ (2x GPU VRAM for model overflow)
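A model's quantized footprint can be sanity-checked from its parameter count: the weights take roughly params × bits/8 bytes, plus overhead for the KV cache and activations. A minimal sketch of that check (the 1.2× overhead factor is an assumption for illustration, not a published formula):

```python
def fits_in_vram(params_b: float, bits: int, vram_gb: float = 10.0,
                 overhead: float = 1.2) -> bool:
    """Rough VRAM fit check for a quantized model.

    params_b: parameter count in billions; bits: quantization width.
    overhead (assumed 20%) stands in for KV cache and activations.
    """
    weight_gb = params_b * bits / 8  # 1e9 params * bits/8 bytes ~= GB
    return weight_gb * overhead <= vram_gb

print(fits_in_vram(8, 8))    # Llama 3.1 8B at Q8: ~9.6 GB -> True
print(fits_in_vram(22, 4))   # Codestral 22B at Q4: ~13.2 GB -> False
```

This matches Codestral 22B landing in the "tight fit" tier above: its ~13GB Q4 footprint exceeds the 3080's 10GB, so some layers spill to system RAM.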
Performance Estimates
Estimated tokens/sec for LLM inference based on 760 GB/s memory bandwidth, not hardware benchmarks. Methodology · What is Q4/Q8?
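These estimates follow from LLM decoding being memory-bound: each generated token reads roughly the full set of quantized weights, so tok/s ≈ bandwidth ÷ weight size, scaled by an efficiency factor. A sketch under that assumption (the 0.5-0.65 efficiency range is illustrative, not the site's exact methodology):

```python
def est_tok_s(params_b: float, bits: int, bandwidth_gbs: float = 760.0,
              eff=(0.5, 0.65)):
    """Bandwidth-bound decode estimate: (low, high) tokens/sec.

    Each token streams ~all quantized weights (params_b * bits/8 GB);
    eff models real-world utilization (assumed range, not measured).
    """
    gb_per_token = params_b * bits / 8
    return tuple(round(bandwidth_gbs * e / gb_per_token) for e in eff)

print(est_tok_s(8, 8))   # 8B model at Q8 -> (48, 62)
```

That lands close to the ~49-61 tok/s figure listed for Llama 3.1 8B at Q8 on this card's 760 GB/s bus.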
Llama 3.1 8B (8B) · Q8: ~49-61 tok/s (Fast)
Qwen 2.5 14B (14B) · Q4: ~46-57 tok/s (Fast)
Mistral 7B (7B) · Q8: ~56-69 tok/s (Fast)
Codestral 22B (22B) · Offload: ~1-3 tok/s (Very slow)
Pros
- Great used prices
- Still solid for 1440p
- CUDA support
Cons
- Only 10GB VRAM
- Older generation
- High power draw for the performance
Tags: gaming · budget
Will It Run?
Llama 3.1 8B (8B): Q8
Qwen 2.5 14B (14B): Q4
Mistral 7B (7B): Q8
FLUX.1 Dev (12B): Q4
Stable Diffusion XL (6.6B): Q8
Stable Diffusion 3.5 Large (8B): Q8
HunyuanVideo (13B): Offload
CogVideoX-5B (5B): Q8