Can NVIDIA GeForce RTX 4080 run Llama 3.1 70B?
70B parameter LLM on 16GB GDDR6X
No — not enough VRAM
Speed: Will not load
Quality: N/A
VRAM Requirements
Llama 3.1 70B is a 70B parameter model. At full precision (FP16), it requires 140GB of VRAM. Your NVIDIA GeForce RTX 4080 only has 16GB — not enough even at maximum compression.
FP16 (Full Precision): 140GB (need 124GB more)
Maximum quality, no quantization
Q8 (8-bit): 70GB (need 54GB more)
Near-lossless, ~50% size reduction
Q4 (4-bit): 40GB (need 24GB more)
Good quality, ~75% size reduction
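The figures above come from straightforward arithmetic: parameter count times bytes per weight, plus a few extra GB for the KV cache and runtime buffers (which is why the Q4 figure is 40GB rather than a bare 35GB). A minimal Python sketch of that estimate:

```python
# Back-of-envelope VRAM math: parameter count times bytes per weight.
# Real loads add a few GB on top for the KV cache and runtime buffers.
PARAMS_BILLION = 70      # Llama 3.1 70B
GPU_VRAM_GB = 16         # RTX 4080

BYTES_PER_WEIGHT = {"FP16": 2.0, "Q8 (8-bit)": 1.0, "Q4 (4-bit)": 0.5}

for quant, bpw in BYTES_PER_WEIGHT.items():
    weights_gb = PARAMS_BILLION * bpw      # e.g. 70B params at 2 bytes each = 140GB
    shortfall = weights_gb - GPU_VRAM_GB
    status = "fits" if shortfall <= 0 else f"short by {shortfall:.0f}GB"
    print(f"{quant}: ~{weights_gb:.0f}GB of weights ({status})")
```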
Your GPU VRAM: 16GB GDDR6X at 717 GB/s bandwidth
Recommended system RAM: 32GB DDR5 (2x GPU VRAM minimum for model overflow)
NVIDIA GeForce RTX 4080 Specs
VRAM: 16GB GDDR6X
Memory Bandwidth: 717 GB/s
TDP: 320W
CUDA Cores: 9,728
Street Price: ~$850
AI Rating: 7/10
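To confirm what your own card reports, you can query it directly. A small sketch using PyTorch (assuming a CUDA-enabled build is installed):

```python
import torch

# Query the installed GPU so the VRAM figure above can be checked locally.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f}GB VRAM")   # e.g. ~16GB on an RTX 4080
else:
    print("No CUDA device detected")
```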
About Llama 3.1 70B
Frontier-class open LLM. Q4 fits on dual 24GB GPUs or a single 48GB card.
Category: LLM · Parameters: 70B · CUDA required: No (runs via llama.cpp/GGUF)
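Because the model cannot fit entirely in 16GB, the practical route on this card is llama.cpp-style inference with most layers in system RAM and only a portion offloaded to the GPU. A hedged sketch using llama-cpp-python (the GGUF filename and layer count are placeholders, and a Q4 70B still needs roughly 40GB of combined RAM and VRAM):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Partial offload sketch: keep as many layers as fit in 16GB of VRAM on the
# GPU and leave the rest in system RAM. Path and layer count are placeholders;
# tune n_gpu_layers down if the card runs out of memory.
llm = Llama(
    model_path="./llama-3.1-70b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=20,   # portion of the model's 80 transformer layers on the GPU
    n_ctx=4096,
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Expect low tokens-per-second in this configuration; the verdict above reflects that the card cannot hold the model on its own, not that inference is impossible with enough system RAM.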