Can AMD Radeon RX 6900 XT run Codestral 22B?

A 22B-parameter code model on 16GB of GDDR6

Yes — runs at 4-bit quantization
Speed: ~13-17 tok/s (moderate, usable for interactive chat)
Quality: good, with slight degradation on complex reasoning
AMD GPUs lack CUDA. While Codestral 22B can technically run via llama.cpp/GGUF, the setup is more complex and less optimized than on NVIDIA hardware.
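
If you'd rather use llama.cpp directly instead of Ollama, here is a minimal build sketch for its ROCm/HIP backend. Treat the flag name as an assumption: recent checkouts use GGML_HIP, older ones used LLAMA_HIPBLAS, so check docs/build.md in the repo for your version.

# Build llama.cpp with the ROCm/HIP backend (assumes ROCm is already installed;
# the GGML_HIP flag applies to recent checkouts)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON
cmake --build build --config Release -j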

VRAM Requirements

Codestral 22B is a 22B-parameter model. At full precision (FP16, 2 bytes per parameter), the weights alone need about 44GB of VRAM. Your AMD Radeon RX 6900 XT has 16GB, so you'll need to quantize to 4-bit (Q4) to fit.

FP16 (Full Precision): 44GB (need 28GB more). Maximum quality, no quantization.

Q8 (8-bit): 22GB (need 6GB more). Near-lossless, ~50% size reduction.

Q4 (4-bit): 13GB (3GB free). Good quality, ~75% size reduction.

Your GPU VRAM: 16GB GDDR6 at 512 GB/s bandwidth
Recommended system RAM: 32GB DDR5 (at least 2x GPU VRAM, to leave room for model overflow)
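
These figures are just parameter math. As a rule of thumb (weights only, ignoring KV cache and runtime overhead), you can sanity-check them in a shell:

# Rough VRAM estimates for a 22B-parameter model (weights only)
echo "FP16: $((22 * 2)) GB"   # 2 bytes per parameter
echo "Q8:   $((22 * 1)) GB"   # 1 byte per parameter
echo "Q4:   ~13 GB"           # ~0.5-0.6 bytes per parameter with K-quants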

What This Means in Practice

Codestral 22B at Q4 on the AMD Radeon RX 6900 XT works well for code completion, but complex multi-file operations may show quality drops. It is still very usable for day-to-day coding assistance. Consider a larger-VRAM GPU for professional code generation workflows.

How to Set It Up

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Ollama is the easiest way to run local LLMs. Works on Linux, macOS, and Windows.
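
Before pulling a large model, it's worth confirming ROCm can actually see the card. A quick check, assuming the ROCm userspace tools are installed (the RX 6900 XT reports as gfx1030, which Ollama's ROCm build supports):

# The RX 6900 XT should show up as gfx1030
rocminfo | grep -i gfx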

Step 2: Download and run Codestral 22B

ollama run codestral:22b

This downloads Ollama's default 4-bit quantized build (~13GB). The first run takes a few minutes to download.
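
Once the download finishes, you aren't limited to the interactive prompt. Ollama also serves a local HTTP API on port 11434; for example:

# One-shot completion via Ollama's local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "codestral:22b",
  "prompt": "Write a Python function that checks whether a string is a palindrome.",
  "stream": false
}'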

Step 3: Verify GPU is being used

rocm-smi

Check that VRAM usage increases when the model loads. You should see ~13GB used.
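
To watch usage while the model is generating, refresh rocm-smi in a loop, and ask Ollama itself where the model was loaded:

# Refresh VRAM stats every second during generation
watch -n 1 rocm-smi --showmeminfo vram

# Ollama reports whether a loaded model is running on GPU or CPU
ollama ps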

AMD Radeon RX 6900 XT Specs

VRAM: 16GB GDDR6
Memory Bandwidth: 512 GB/s
TDP: 300W
CUDA Cores: N/A (AMD GPU; 5120 Stream Processors instead)
Street Price: ~$400
AI Rating: 2/10

About Codestral 22B

Top code completion model. Q4 fits on 16GB GPUs.

Category: Code · Parameters: 22B · CUDA required: No (runs via llama.cpp/GGUF)