Can NVIDIA GeForce RTX 3090 Ti run FLUX.1 Dev?
12B parameter Image Gen model on 24GB GDDR6X
VRAM Requirements
FLUX.1 Dev is a 12B parameter model. At full precision (FP16), the weights alone take roughly 24GB (12B parameters × 2 bytes), and the text encoders and VAE need additional memory on top of that. Your NVIDIA GeForce RTX 3090 Ti has 24GB, so you'll need to quantize the model to 8-bit (FP8/Q8) to fit it comfortably.
- FP16 (no quantization): maximum quality, ~24GB of weights
- FP8/Q8: near-lossless, ~50% size reduction (~12GB)
- Q4: good quality, ~75% size reduction (~6GB)
Recommended system RAM: 48GB or more (at least 2x GPU VRAM, so weights can be staged in system memory when they overflow the GPU)
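The sizes above are simple arithmetic: weight memory is parameter count times bytes per parameter. A minimal sketch (the helper name is ours, for illustration only):

```python
def vram_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough weight-only VRAM estimate in GB: parameters x bytes per parameter.

    Ignores activation memory, the text encoders, and the VAE, which add
    several GB on top of this in practice.
    """
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9

# A 12B-parameter model at common precisions:
for label, bits in [("FP16", 16), ("FP8/Q8", 8), ("Q4", 4)]:
    print(f"{label}: ~{vram_gb(12, bits):.0f} GB")
# FP16: ~24 GB, FP8/Q8: ~12 GB, Q4: ~6 GB
```

This is why FP16 is borderline on a 24GB card once the encoders are loaded, while FP8 leaves real headroom.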
What This Means in Practice
FLUX.1 Dev at 8-bit precision on NVIDIA GeForce RTX 3090 Ti produces images virtually identical to full precision. Generation speed is fast and you'll have some VRAM headroom for larger batch sizes or higher resolutions.
How to Set It Up
Step 1: Install ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git && cd ComfyUI && pip install -r requirements.txt
ComfyUI is the recommended UI for Stable Diffusion and FLUX models.
Step 2: Download the model
Download the FLUX.1 Dev weights from Hugging Face (black-forest-labs/FLUX.1-dev) and place them in ComfyUI/models/. The model is approximately 24GB at full precision; the FP8-quantized version is roughly half that.
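One way to fetch the weights is with the huggingface-cli tool. A sketch, not a definitive recipe: the repo is gated behind a license agreement, so you must accept it on the model page and log in first, and the exact subdirectory under ComfyUI/models/ that your ComfyUI version expects may differ.

```shell
# One-time authentication (FLUX.1 Dev is a gated repo)
huggingface-cli login

# Download the main transformer weights into ComfyUI's model folder
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors \
  --local-dir ComfyUI/models/
```

The text encoders and VAE live in the same repo and need to be downloaded and placed in their own model folders as well.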
Step 3: Launch and generate
python main.py
Open http://localhost:8188 in your browser. Load the FP8-quantized weights recommended above rather than the full-precision ones, since FP16 will not fit in 24GB alongside the text encoders.
About FLUX.1 Dev
FLUX.1 Dev is a state-of-the-art 12B parameter image generation model from Black Forest Labs. At FP8 it runs comfortably on GPUs with 16GB of VRAM or more.