AMD Radeon RX 9070 XT
$580 ($549 MSRP)
The AMD Radeon RX 9070 XT is AMD's flagship RDNA 4 gaming GPU with 16GB of GDDR6 memory. It competes directly with the RTX 4080 in gaming performance while costing significantly less. For pure gaming, it is an excellent value. However, the lack of CUDA means it is not recommended for AI workloads — ROCm support is improving but most AI tutorials and frameworks assume NVIDIA hardware.
Best For: High-end gaming at the best price-to-performance ratio
Verdict: Fantastic gaming value, but go NVIDIA if AI is part of your plans.
AI: 4/10
Gaming: 8/10
Specifications
VRAM: 16GB GDDR6
Memory Bandwidth: 650 GB/s
Stream Processors: 4,096
Boost Clock: 2750 MHz
TDP: 300W
Power Connector: 2x 8-pin
Length: 277mm
Form Factor: Dual Slot
Release Year: 2025
AI Capabilities
Capable: 16GB VRAM
Runs most popular models with quantization. The minimum for serious AI work.
No CUDA — most AI frameworks run best on NVIDIA. ROCm support is improving but not all models/tools work.
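One practical upside of ROCm's current state: ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` device API that CUDA code uses, so many CUDA-targeting scripts run unchanged. A minimal visibility check (a sketch, assuming a ROCm build of PyTorch is installed):

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs appear through the regular
# "cuda" device API, so most CUDA-targeting scripts run unchanged.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))        # e.g. the RX 9070 XT
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).shape)                        # matmul runs on the GPU
else:
    print("No ROCm/CUDA-capable device visible to PyTorch")
```

If the first branch prints the card name, frameworks built on PyTorch (diffusers, transformers) have a reasonable chance of working; tools that ship CUDA-only kernels still won't.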
Can run (Q4 quantized)
Llama 3.1 8B, Qwen 2.5 14B, Mistral 7B, FLUX.1 Dev, Stable Diffusion XL, Stable Diffusion 3.5 Large, HunyuanVideo, CogVideoX-5B, Mochi 1, LTX Video, Stable Video Diffusion, Wan Video 14B, Codestral 22B, AlphaFold 2, ESMFold (ESM-2 15B), ESM-2 3B, scGPT, RFdiffusion, Fine-tune Llama 8B, Train SDXL LoRA, Train FLUX LoRA
Tight fit (may need CPU offload)
Qwen 2.5 32B (20GB Q4), Qwen 2.5 Coder 32B (20GB Q4), LLaVA 1.6 34B (20GB Q4)
Recommended system RAM for AI: 32GB+ (2x GPU VRAM for model overflow)
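The "can run" vs. "tight fit" split above comes down to a size check: Q4 weights take roughly half a byte per parameter, plus runtime overhead. A rough sketch (the 20% overhead factor and 90% usable-VRAM cutoff are illustrative assumptions, not the site's exact methodology):

```python
def q4_size_gb(params_b: float, overhead: float = 1.2) -> float:
    """Approximate Q4 model size: ~0.5 bytes/param plus ~20% runtime overhead (assumed)."""
    return params_b * 0.5 * overhead

def fit_category(params_b: float, vram_gb: float = 16.0) -> str:
    """Classify a model against usable VRAM (90% cutoff is an assumption)."""
    return "can run" if q4_size_gb(params_b) <= vram_gb * 0.9 else "tight fit / offload"

print(round(q4_size_gb(32), 1))  # → 19.2, close to the ~20GB listed for Qwen 2.5 32B
print(fit_category(8))           # → can run (Llama 3.1 8B)
print(fit_category(32))          # → tight fit / offload (Qwen 2.5 32B)
```

This also explains the 2x-VRAM system RAM guideline: anything that doesn't fit spills into system memory via CPU offload.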
Performance Estimates
Estimated tokens/sec for LLM inference based on 650 GB/s memory bandwidth — these are estimates, not hardware benchmarks.
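These estimates treat decode as memory-bandwidth-bound: generating each token streams the full weights from VRAM, so throughput is roughly effective bandwidth divided by model size. A sketch (the ~45% bandwidth-efficiency factor is an assumption chosen to land in the ranges below, not a measured value):

```python
def estimate_tok_s(params_b: float, bytes_per_param: float,
                   bandwidth_gb_s: float = 650.0, efficiency: float = 0.45) -> float:
    """tok/s ~ effective bandwidth / model size in GB (memory-bound decode)."""
    model_gb = params_b * bytes_per_param
    return efficiency * bandwidth_gb_s / model_gb

print(round(estimate_tok_s(8, 2.0)))   # Llama 3.1 8B, FP16 (2 bytes/param)  → 18
print(round(estimate_tok_s(14, 1.0)))  # Qwen 2.5 14B, Q8 (~1 byte/param)   → 21
print(round(estimate_tok_s(22, 0.5)))  # Codestral 22B, Q4 (~0.5 bytes/param) → 27
```

Real-world numbers also depend on compute throughput, KV-cache traffic, and the software stack, which is why the table gives ranges.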
- Llama 3.1 8B · FP16 · ~16-20 tok/s · Usable
- Qwen 2.5 32B · Offload · ~1-3 tok/s · Very slow
- Qwen 2.5 14B · Q8 · ~19-24 tok/s · Usable
- Mistral 7B · FP16 · ~18-23 tok/s · Usable
- Codestral 22B · Q4 · ~22-27 tok/s · Usable
- Qwen 2.5 Coder 32B · Offload · ~1-3 tok/s · Very slow
Pros
- Competitive with RTX 4080
- 16GB VRAM
- Good open-source driver support
Cons
- Weaker AI/ML ecosystem than NVIDIA
- No CUDA
- FSR behind DLSS
Will It Run?
- Llama 3.1 8B (8B): FP16
- Qwen 2.5 32B (32B): Offload
- Qwen 2.5 14B (14B): Q8
- Mistral 7B (7B): FP16
- FLUX.1 Dev (12B): Q8
- Stable Diffusion XL (6.6B): FP16
- Stable Diffusion 3.5 Large (8B): Q8
- HunyuanVideo (13B): Q4