Can NVIDIA GeForce RTX 3080 10GB run Qwen 2.5 72B?
A 72B-parameter LLM on 10GB of GDDR6X
No — not enough VRAM
Speed: Will not load
Quality: N/A
VRAM Requirements
Qwen 2.5 72B has 72 billion parameters. At full precision (FP16, 2 bytes per weight), the weights alone require 144GB of VRAM. Your NVIDIA GeForce RTX 3080 10GB has only 10GB, which is not enough even at maximum compression. The requirements at each precision level are listed below, with a quick sanity-check calculation after the list.
FP16 (Full Precision): 144GB (need 134GB more). Maximum quality, no quantization.
Q8 (8-bit): 72GB (need 62GB more). Near-lossless, ~50% size reduction.
Q4 (4-bit): 42GB (need 32GB more). Good quality, ~75% size reduction.
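The weight footprint behind these figures is just parameter count times bits per weight. The Python sketch below reproduces that math; note that real GGUF files carry quantization metadata and inference needs KV-cache headroom, so the published figures above (e.g. 42GB for Q4) run a few GB higher than the raw weight size the sketch computes.

```python
# Back-of-envelope VRAM estimate for loading model weights at different
# precisions. Raw weight math only: real GGUF quants add metadata and the
# KV cache needs its own headroom, so published figures run higher.

PARAMS = 72e9        # Qwen 2.5 72B parameter count
GPU_VRAM_GB = 10     # RTX 3080 10GB

precisions = {
    "FP16": 16,  # bits per weight, full precision
    "Q8": 8,     # 8-bit quantization
    "Q4": 4,     # 4-bit quantization
}

for name, bits in precisions.items():
    weights_gb = PARAMS * bits / 8 / 1e9  # bits -> bytes -> GB
    shortfall = weights_gb - GPU_VRAM_GB
    status = "fits" if shortfall <= 0 else f"short by ~{shortfall:.0f}GB"
    print(f"{name}: ~{weights_gb:.0f}GB of weights, {status}")
```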
Your GPU VRAM: 10GB GDDR6X at 760 GB/s bandwidth
Recommended system RAM: 32GB DDR5 (2x GPU VRAM minimum for model overflow)
NVIDIA GeForce RTX 3080 10GB Specs
VRAM: 10GB GDDR6X
Memory Bandwidth: 760 GB/s
TDP: 320W
CUDA Cores: 8,704
Street Price: ~$450
AI Rating: 4/10
About Qwen 2.5 72B
Top open LLM for reasoning. Similar requirements to Llama 70B.
Category: LLM · Parameters: 72B · CUDA required: No (runs via llama.cpp/GGUF)
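To illustrate the llama.cpp/GGUF route mentioned above, here is a minimal llama-cpp-python sketch. The GGUF filename and layer count are assumptions for illustration, not tested values; on a 10GB card the ~42GB Q4 file mostly spills to system RAM, so expect the load to fail or run at unusable speed, matching the verdict above.

```python
# Hypothetical sketch: attempting to load a Q4 GGUF of Qwen 2.5 72B with
# llama-cpp-python, offloading only the few layers that fit in 10GB of VRAM.
# The model path and n_gpu_layers value are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-72b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=8,   # assumption: only a handful of layers fit in 10GB
    n_ctx=2048,       # small context window to limit KV-cache memory
)

# Even if the load succeeds, most layers run from system RAM at CPU speed.
print(llm("Why won't a 72B model fit in 10GB of VRAM?", max_tokens=64))
```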