Can Intel Arc B580 run Codestral 22B?
A 22B-parameter code model on 12GB GDDR6
Barely — requires CPU/RAM offloading
~1-3 tok/s (offload)
Speed: Very slow — expect 1-3 tokens/sec
Quality: Fine, but the speed makes it impractical for interactive use
Intel GPUs lack CUDA. While Codestral 22B can technically run via llama.cpp/GGUF, the setup is more complex and less optimized than on NVIDIA hardware.
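If you want more control than Ollama gives you, llama.cpp can target Intel Arc through its Vulkan or SYCL backends instead of CUDA. Below is a minimal build sketch; the CMake option names match recent llama.cpp releases, so check the repo's backend docs if your version differs.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Option A: Vulkan backend (works with the standard Arc graphics drivers)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
# Option B: SYCL backend (requires the Intel oneAPI toolkit to be installed and sourced)
# source /opt/intel/oneapi/setvars.sh
# cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
# cmake --build build --config Release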
VRAM Requirements
Codestral 22B is a 22B-parameter model. At full precision (FP16) it needs about 44GB of VRAM. Your Intel Arc B580 has only 12GB, which is not enough even for a 4-bit quant, so part of the model has to spill over into system RAM.
FP16 (Full Precision): 44GB (need 32GB more). Maximum quality, no quantization.
Q8 (8-bit): 22GB (need 10GB more). Near-lossless, ~50% size reduction.
Q4 (4-bit): 13GB (need 1GB more). Good quality, ~75% size reduction.
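These numbers are just per-weight arithmetic: roughly 2 bytes per parameter at FP16, 1 byte at Q8, and about 0.6 bytes at Q4_K_M, before the KV cache and runtime buffers are added on top. A quick sanity check:

# Rough model size = bytes-per-weight x parameter count (KV cache and buffers add a few GB more)
awk 'BEGIN { p = 22e9; printf "FP16: %.0f GB   Q8: %.0f GB   Q4: %.0f GB\n", p*2/1e9, p*1/1e9, p*0.6/1e9 }'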
Your GPU VRAM: 12GB GDDR6 at 456 GB/s bandwidth
Recommended system RAM: 32GB DDR5 (2x GPU VRAM minimum for model overflow)
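Because the ~13GB of 4-bit weights do not fit in 12GB of VRAM, a few layers have to stay in system RAM. With llama.cpp this split is set explicitly via --n-gpu-layers (-ngl). The sketch below assumes a locally downloaded Q4_K_M GGUF; the filename and the layer count are illustrative, so raise or lower -ngl until VRAM is nearly full without the load failing.

# Offload most layers to the 12GB card; the remainder run on the CPU from system RAM
./build/bin/llama-cli \
  -m ./models/codestral-22b-v0.1-Q4_K_M.gguf \
  -ngl 40 \
  -c 4096 \
  -p "Write a Python function that parses a CSV file."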
How to Set It Up
Step 1: Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Ollama is the easiest way to run local LLMs. Works on Linux, macOS, and Windows.
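Once the script finishes, a quick sanity check confirms the CLI is installed and the background server responds (both are standard Ollama commands):

ollama --version
ollama list   # an empty list is normal on a fresh install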
Step 2: Download and run Codestral 22B
ollama run codestral:22b
This downloads a 4-bit quant of the model (~13GB). The first run takes a few minutes.
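To confirm which quantization you actually pulled and how big it is on disk, ask Ollama for the model's metadata:

ollama show codestral:22b   # prints parameter count, quantization level, and context length
ollama list                 # shows the on-disk size of each downloaded model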
Step 3: Verify GPU is being used
ollama ps
Check how much of the model is loaded on the GPU versus the CPU. With a ~13GB Q4 model and 12GB of VRAM you should see a partial GPU/CPU split rather than 100% GPU.
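On Linux you can also watch the card itself with Intel's GPU monitoring tool while the model generates (the package name below is the usual Debian/Ubuntu one; it differs on other distros):

sudo apt install intel-gpu-tools
sudo intel_gpu_top   # shows per-engine utilization on the Arc B580 during generation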
Intel Arc B580 Specs
VRAM: 12GB GDDR6
Memory Bandwidth: 456 GB/s
TDP: 190W
CUDA Cores: N/A (Intel Xe cores; no CUDA support)
Street Price: ~$230
AI Rating: 2/10
Other GPUs That Run Codestral 22B
Other Code Models on Intel Arc B580
About Codestral 22B
Mistral AI's code completion model. A Q4 quant fits on 16GB GPUs.
Category: Code · Parameters: 22B · CUDA required: No (runs via llama.cpp/GGUF)