Can NVIDIA A100 40GB run DeepSeek R1 70B?
A 70B-parameter LLM on 40GB of HBM2
VRAM Requirements
DeepSeek R1 70B has 70 billion parameters. At its native FP16 precision, the weights alone need about 140GB of VRAM (70B parameters at 2 bytes each). The NVIDIA A100 40GB has 40GB, so you'll need 4-bit (Q4) quantization to fit; see the breakdown and the quick arithmetic check below.
FP16 (~140GB): Maximum quality, no quantization
Q8 (~70GB): Near-lossless, ~50% size reduction
Q4 (~35GB): Good quality, ~75% size reduction
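As a quick sanity check on those figures, the rule of thumb is VRAM ≈ parameters × bytes per weight. Note this ignores KV cache and runtime overhead, and real Q4_K_M files use closer to 4.8 bits per weight, which is why the actual download runs ~43GB rather than 35GB:

# 70B parameters at 2 bytes (FP16), 1 byte (Q8), and 0.5 bytes (Q4) per weight
echo "FP16: $((70 * 2))GB  Q8: $((70 * 1))GB  Q4: $(echo "70 * 0.5" | bc)GB"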
Recommended system RAM: 80GB DDR5 (at least 2x the GPU's VRAM, so any layers or cache that don't fit on the GPU can spill into system memory)
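To confirm a machine meets that guideline before you start, check total memory with the standard Linux tool:

free -h   # the "total" column of the Mem: row should read 80GB or more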
What This Means in Practice
At 4-bit quantization, DeepSeek R1 70B is roughly the size of the NVIDIA A100 40GB's VRAM, so it runs, but tightly: the KV cache also needs memory, and Ollama may offload a few layers to system RAM at longer context lengths. Expect some quality trade-offs too; complex reasoning tasks and nuanced writing may show slight degradation. For casual chat, code assistance, and general queries, Q4 is perfectly usable. For critical work, consider a GPU with more VRAM so you can run Q8.
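If VRAM is the bottleneck, one practical lever is context length, since the KV cache grows with it. Inside an Ollama interactive session you can shrink the context window; 4096 below is an illustrative value, not a recommendation:

/set parameter num_ctx 4096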
How to Set It Up
Step 1: Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Ollama is the easiest way to run local LLMs. It works on Linux, macOS, and Windows.
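Once the script finishes, a quick check confirms the CLI is installed and on your PATH:

ollama --version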
Step 2: Download and run DeepSeek R1 70B
ollama run deepseek-r1:70b
The default 70b tag is already the Q4_K_M quantization, so no extra tag is needed. The first run takes a while to download the ~43GB of weights.
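You can confirm the pull completed and see exactly which quantization you got:

ollama list                   # downloaded models and their sizes
ollama show deepseek-r1:70b   # model details, including the quantization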
Step 3: Verify GPU is being used
nvidia-smi
Check that VRAM usage jumps when the model loads; you should see close to the full 40GB in use.
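To watch memory live and see how Ollama split the model between GPU and CPU, two more standard checks:

ollama ps              # the PROCESSOR column shows the GPU/CPU split
watch -n 1 nvidia-smi  # refresh GPU memory readings every second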
NVIDIA A100 40GB Specs
40GB HBM2 at ~1.6TB/s memory bandwidth, Ampere architecture, 6,912 CUDA cores.
About DeepSeek R1 70B
A strong reasoning model; the 70B variant is distilled from Llama 3.3 70B, so its hardware requirements sit in the same tier as Llama 70B.