
AMD Radeon RX 6900 XT
~$400 used · $999 MSRP
The RX 6900 XT was AMD's previous flagship, with 16GB of VRAM and strong rasterization performance. Used prices around $400 make it a solid gaming value. For AI, the 16GB of VRAM looks attractive, but without CUDA support it's effectively gaming-only; ROCm support exists but is unreliable on this GPU generation.
Best For: Used-market gaming with 16GB VRAM at AMD value pricing
Verdict: Great used gaming card, but don't buy it expecting to run AI models.
AI: 2/10
Gaming: 7/10
Specifications
VRAM: 16GB GDDR6
Memory Bandwidth: 512 GB/s
Stream Processors: 5,120
Boost Clock: 2250 MHz
TDP: 300W
Power Connector: 2x 8-pin
Length: 267mm
Form Factor: Triple Slot
Release Year: 2020
AI Capabilities
Capable: 16GB VRAM
Runs most popular models with quantization. The minimum for serious AI work.
No CUDA: most AI frameworks run best on NVIDIA. ROCm support is improving, but not all models and tools work.
Can run (Q4 quantized)
Llama 3.1 8B, Qwen 2.5 14B, Mistral 7B, FLUX.1 Dev, Stable Diffusion XL, Stable Diffusion 3.5 Large, HunyuanVideo, CogVideoX-5B, Mochi 1, LTX Video, Stable Video Diffusion, Wan Video 14B, Codestral 22B, AlphaFold 2, ESMFold (ESM-2 15B), ESM-2 3B, scGPT, RFdiffusion, Fine-tune Llama 8B, Train SDXL LoRA, Train FLUX LoRA
Tight fit (may need CPU offload)
Qwen 2.5 32B (20GB Q4), Qwen 2.5 Coder 32B (20GB Q4), LLaVA 1.6 34B (20GB Q4)
Recommended system RAM for AI: 32GB+ (2x GPU VRAM for model overflow)
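The "20GB Q4" figures above follow from a common rule of thumb: a quantized model's footprint is roughly parameter count times bits per weight, plus a few GB for KV cache and runtime context. A minimal sketch (the 4.5 bits/weight and 2GB overhead figures are assumed approximations, not measured values):

```python
def q4_vram_gb(params_billion, bits_per_weight=4.5, overhead_gb=2.0):
    """Rough VRAM estimate for a Q4-quantized LLM.

    bits_per_weight ~4.5 approximates 4-bit GGUF-style quants (4-bit
    weights plus scale/metadata); overhead_gb covers KV cache and the
    runtime context. Both are assumed ballpark figures.
    """
    return params_billion * bits_per_weight / 8 + overhead_gb

print(q4_vram_gb(32))  # -> 20.0 GB: over a 16GB card's budget, hence CPU offload
print(q4_vram_gb(8))   # -> 6.5 GB: fits comfortably in 16GB
```

This is why the 32B models land in the "tight fit" bucket on a 16GB card while 8B-14B models run fully on-GPU.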
Performance Estimates
Estimated tokens/sec for LLM inference, derived from the 512 GB/s memory bandwidth; these are estimates, not hardware benchmarks.
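The bandwidth-based methodology can be sketched as a quick calculation: each generated token reads roughly every weight once, so memory bandwidth divided by model size gives a hard ceiling, scaled down by a kernel-efficiency factor. The 0.35 efficiency factor here is an assumed fudge factor, not a benchmark:

```python
def est_tok_per_s(bandwidth_gb_s, model_size_gb, efficiency=0.35):
    """Memory-bandwidth ceiling for single-stream LLM decode speed.

    Decoding one token streams (approximately) the full weight set from
    VRAM, so bandwidth / model size bounds tok/s; efficiency (~0.35,
    an assumption) accounts for kernel and scheduling overhead.
    """
    return bandwidth_gb_s / model_size_gb * efficiency

# RX 6900 XT: 512 GB/s. Llama 3.1 8B at FP16 is ~16 GB of weights.
print(round(est_tok_per_s(512, 16), 1))  # -> 11.2, inside the ~10-12 tok/s estimate
```

Models that exceed 16GB and spill to system RAM are throttled by PCIe bandwidth instead, which is why the offloaded 32B entries collapse to ~1-3 tok/s.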
Llama 3.1 8B (8B, FP16): ~10-12 tok/s (Slow)
Qwen 2.5 32B (32B, Offload): ~1-3 tok/s (Very slow)
Qwen 2.5 14B (14B, Q8): ~12-15 tok/s (Slow)
Mistral 7B (7B, FP16): ~11-14 tok/s (Slow)
Codestral 22B (22B, Q4): ~13-17 tok/s (Slow)
Qwen 2.5 Coder 32B (32B, Offload): ~1-3 tok/s (Very slow)
Pros
- 16GB VRAM
- Strong rasterization at used prices
- No adapter cables needed
Cons
- No CUDA for AI
- Old RDNA 2 architecture
- Weak ray tracing
- High power draw
Tags: gaming, budget