Can NVIDIA GeForce RTX 3090 Ti run Codestral 22B?

A 22B-parameter code model on 24GB of GDDR6X

Yes — runs at 8-bit quantization

Throughput: ~24-29 tok/s (usable)
Speed: fast inference, near-native speed
Quality: near-lossless, virtually identical to FP16

VRAM Requirements

Codestral 22B is a 22B parameter model. At full precision (FP16), it requires 44GB of VRAM. Your NVIDIA GeForce RTX 3090 Ti has 24GB, so you'll need to quantize it to 8-bit (Q8) to fit.

FP16 (full precision): 44GB needed (20GB more than available). Maximum quality, no quantization.

Q8 (8-bit): 22GB needed (2GB free). Near-lossless, ~50% size reduction.

Q4 (4-bit): 13GB needed (11GB free). Good quality, ~75% size reduction.
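These figures follow from simple arithmetic: parameter count times bytes per weight, plus roughly 1-2GB of overhead for the KV cache and CUDA buffers. A quick check (the 0.59 bytes/weight for Q4 is an approximation for typical GGUF Q4 variants):

echo "22 * 2.0" | bc -l    # FP16: 2 bytes per weight -> 44GB
echo "22 * 1.0" | bc -l    # Q8: 1 byte per weight -> 22GB
echo "22 * 0.59" | bc -l   # Q4: ~0.59 bytes per weight -> ~13GB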

Your GPU VRAM: 24GB GDDR6X at 1008 GB/s bandwidth
Recommended system RAM: 48GB DDR5 (at least 2x GPU VRAM, so weights and cache that don't fit can spill into system memory)
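Both figures are easy to confirm before downloading anything; nvidia-smi ships with the NVIDIA driver, and free is standard on Linux:

nvidia-smi --query-gpu=memory.total --format=csv   # total VRAM, ~24GB (reported in MiB)
free -h                                            # total and available system RAM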

What This Means in Practice

Codestral 22B at Q8 on the NVIDIA GeForce RTX 3090 Ti gives excellent code completion and generation. It is fast enough for real-time IDE integration, and it handles complex refactoring, multi-file edits, and long-context code understanding.
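Editor plugins typically talk to Ollama's local HTTP API. A minimal completion request, assuming Ollama is running on its default port 11434:

curl http://localhost:11434/api/generate -d '{
  "model": "codestral:22b",
  "prompt": "Write a Python function that checks whether a string is a palindrome.",
  "stream": false
}'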

How to Set It Up

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Ollama is the easiest way to run local LLMs. Works on Linux, macOS, and Windows.
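Once the script finishes, confirm the install (on Linux the script also registers a background service):

ollama --version   # confirm the CLI is installed
ollama serve       # only needed if the background service isn't running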

Step 2: Download and run Codestral 22B

ollama run codestral:22b

This downloads the model weights (~22GB). The first run takes a few minutes while the download completes. Note that the default tag may resolve to a smaller quantization; check the model's tag list on ollama.com if you specifically want the q8_0 variant.
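If you want to download without starting an interactive session, or to check what's already on disk:

ollama pull codestral:22b   # download the model without opening a chat
ollama list                 # show downloaded models and their sizes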

Step 3: Verify GPU is being used

nvidia-smi

Check that VRAM usage increases when the model loads. You should see ~22GB used.
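To watch memory usage update live while you send the model a prompt, nvidia-smi can poll on an interval:

nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 2   # refresh every 2 seconds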

NVIDIA GeForce RTX 3090 Ti Specs

VRAM: 24GB GDDR6X
Memory bandwidth: 1008 GB/s
TDP: 450W
CUDA cores: 10,752
Street price: ~$1,000
AI rating: 8/10
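Memory bandwidth is the spec that matters most for generation speed: producing each token requires streaming the full set of weights from VRAM once, so throughput is capped at roughly bandwidth divided by model size. A quick sanity check against the ~24-29 tok/s figure above:

echo "1008 / 22" | bc -l   # ~45.8 tok/s theoretical ceiling for a 22GB Q8 model
# measured 24-29 tok/s is ~50-65% of that ceiling, which is typical once
# kernel launch overhead and KV-cache reads are accounted for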


About Codestral 22B

A top code-completion model from Mistral AI. At Q4 it fits on 16GB GPUs.

Category: Code · Parameters: 22B · CUDA required: No (runs via llama.cpp/GGUF)
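Because the weights are distributed in GGUF format, llama.cpp works as an alternative to Ollama. A minimal sketch, assuming you have already downloaded a Q8 GGUF build from Hugging Face (the filename below is a placeholder, not an exact release name):

# -ngl 99 offloads all layers to the GPU, -c sets the context length
llama-server -m Codestral-22B-Q8_0.gguf -ngl 99 -c 8192

By default llama-server listens on port 8080 and exposes an OpenAI-compatible API, so the same editor plugins work against it.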