ComfyUI Common Issues & Troubleshooting Guide
A comprehensive reference for fixing the most frequent problems in ComfyUI—from installation failures to generation errors, with step-by-step solutions for beginners and advanced users.
We cover:
- Installation/launch failures
- Model loading/compatibility issues
- Workflow execution errors
- Generation quality problems
- Performance/resource limits
- Plugin/extension conflicts
Quick Troubleshooting Checklist (5-Minute Fixes)
Before diving into detailed solutions, try these quick checks—they resolve 80% of common issues:
- Restart ComfyUI (many changes require a restart to take effect).
- Verify model files are in the correct directory (e.g., LoRAs → `models/lora/`, checkpoints → `models/checkpoints/`).
- Update ComfyUI to the latest version (run `git pull` in your ComfyUI folder or use ComfyUI Manager).
- Free up VRAM: close other GPU-intensive apps (games, video editors) or reduce image resolution.
- Check node connections: ensure all required inputs (e.g., `model`, `conditioning`, `image`) are linked.
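The directory check in the second item can be scripted. A minimal sketch, assuming the standard ComfyUI folder layout (the `COMFY_ROOT` path and the extension whitelist are examples — adjust them to your install):

```python
from pathlib import Path

# Hypothetical install path - change to your own ComfyUI folder.
COMFY_ROOT = Path("path/to/ComfyUI")

# Expected extensions per model folder (common subset, not exhaustive).
EXPECTED = {
    "checkpoints": (".ckpt", ".safetensors"),
    "lora": (".ckpt", ".safetensors"),
    "controlnet": (".pth", ".safetensors"),
}

def misplaced_models(root: Path) -> list[str]:
    """Return files sitting in a model folder whose extension doesn't belong there."""
    problems = []
    for folder, exts in EXPECTED.items():
        model_dir = root / "models" / folder
        if not model_dir.is_dir():
            problems.append(f"missing folder: {model_dir}")
            continue
        for f in model_dir.iterdir():
            if f.is_file() and f.suffix.lower() not in exts:
                problems.append(f"unexpected file type: {f}")
    return problems
```

Run it after dropping new models in place; anything it flags is a likely reason a model doesn't show up in a node dropdown.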
1. Installation & Launch Issues
Issue 1: ComfyUI Fails to Launch (No Window/Command Line Errors)
Symptoms: Double-clicking run_nvidia_gpu.bat (Windows) or run.sh (macOS/Linux) does nothing, or the terminal closes immediately.
Causes:
- Missing Python 3.10+ (ComfyUI requires Python 3.10–3.11; 3.12+ is unsupported).
- GPU driver outdated (NVIDIA/AMD drivers incompatible with PyTorch).
- Corrupted dependencies (missing or broken Python packages).
Solutions:
- Install Python 3.10.11 (recommended): Python.org/downloads/release/python-31011/ (check "Add Python to PATH" during installation).
- Update GPU drivers:
- NVIDIA: Use GeForce Experience or NVIDIA Driver Download.
- AMD: Use Radeon Software or AMD Driver Download.
- Repair dependencies:
```
# Navigate to your ComfyUI folder
cd path/to/ComfyUI
# Reinstall requirements
pip install --upgrade -r requirements.txt
```
Issue 2: "CUDA Out of Memory" on Launch (Not During Generation)
Symptoms: ComfyUI crashes immediately with "CUDA out of memory" even before loading a workflow.
Causes:
- Multiple instances of ComfyUI running in the background.
- GPU VRAM reserved by other apps (e.g., Discord hardware acceleration, Chrome GPU tasks).
Solutions:
- Close all background apps using GPU:
- Windows: Open Task Manager → "Performance" → "GPU" → End tasks using significant VRAM.
- macOS/Linux: Use `htop` (Linux) or Activity Monitor (macOS) to kill GPU-heavy processes.
- Launch ComfyUI in Low VRAM Mode:
- Edit
run_nvidia_gpu.bat(Windows) orrun.sh(Linux/macOS). - Add
--lowvramto the launch command:python main.py --lowvram
- Edit
Issue 3: "download fp8-version of z-image-turbo" Error
Solutions:
- Download the fp8 version of the z-image-turbo model:
  - Go to the ComfyUI Model Repository.
  - Download the model files (e.g., `z-image-turbo-fp8-e4m3fn.safetensors`).
  - Place the model in the `models/diffusion_models/` folder.
2. Model Loading & Compatibility Issues
Issue 1: Model Not Appearing in ComfyUI (Checkpoints/LoRAs/ControlNets)
Symptoms: Downloaded models don’t show up in node dropdowns (e.g., Load Checkpoint → no new model).
Causes:
- Model in the wrong directory (e.g., a LoRA in `models/checkpoints/`).
- Unsupported file format (e.g., a corrupted `.ckpt`, or a `.safetensors` file with an incorrect extension).
- ComfyUI didn’t scan the model folder (a restart is required after adding files).
Solutions:
- Verify model directory (critical!):
| Model Type | Correct Folder | Supported Formats |
|---|---|---|
| Checkpoints | `models/checkpoints/` | `.ckpt`, `.safetensors` |
| LoRAs | `models/lora/` | `.ckpt`, `.safetensors` |
| ControlNets | `models/controlnet/` | `.pth`, `.safetensors` |
| Embeddings | `models/embeddings/` | `.bin`, `.pt` |
| VAEs | `models/vae/` | `.ckpt`, `.safetensors` |
- Rename corrupted/incorrect files:
  - Ensure extensions are lowercase (e.g., `.safetensors`, not `.Safetensors`).
  - Remove special characters from filenames (e.g., `my-lora.safetensors` instead of `my#lora!v2.safetensors`).
- Restart ComfyUI (it only scans model folders on launch).
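The renaming rules above can be automated. A sketch — the character whitelist here is my own choice, not an official ComfyUI rule:

```python
import re
from pathlib import Path

def sanitize_model_name(filename: str) -> str:
    """Lowercase the extension and replace special characters in a model filename."""
    p = Path(filename)
    # Keep only letters, digits, dots, dashes, and underscores in the stem.
    stem = re.sub(r"[^A-Za-z0-9._-]", "-", p.stem)
    return stem + p.suffix.lower()

print(sanitize_model_name("my#lora!v2.Safetensors"))  # my-lora-v2.safetensors
```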
Issue 2: "Model Load Failed" Error (Corrupted/Incompatible Models)
Symptoms: ComfyUI throws "Failed to load model" when selecting a checkpoint/LoRA.
Causes:
- Corrupted download (incomplete file due to network issues).
- Model incompatible with your base model (e.g., SDXL model used with SD 1.5 workflow).
- Quantized model missing dependencies (e.g., GGUF models require `llama-cpp-python`).
Solutions:
- Re-download the model:
- Use a download manager (e.g., IDM, wget) to avoid corruption.
- Verify file size matches the source (Civitai/Hugging Face lists expected sizes).
- Check compatibility:
- SD 1.5 models → Use with SD 1.5 checkpoints (not SDXL).
- SDXL models → Require SDXL-compatible nodes (e.g., `CLIP Text Encode (SDXL)`).
- For quantized models (GGUF/INT4):
```
# Install required dependencies
pip install llama-cpp-python accelerate
```
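To confirm a re-download isn't truncated or corrupted, compare its size and SHA-256 hash against the values listed on the model page (Hugging Face shows both). A sketch:

```python
import hashlib
from pathlib import Path

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a large model file without loading it all into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_sha256: str, expected_size: int) -> bool:
    """True only if both the byte size and the hash match the published values."""
    p = Path(path)
    return p.stat().st_size == expected_size and file_sha256(path) == expected_sha256
```

A size mismatch almost always means the download was interrupted; a hash mismatch with a matching size points to corruption in transit.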
Issue 3: LoRA/ControlNet Has No Effect on Generation
Symptoms: Model loads successfully, but generation output doesn’t reflect the LoRA/ControlNet.
Causes:
- Missing trigger word (LoRAs require specific keywords, e.g., `my-custom-lora`, in prompts).
- Incorrect node connections (LoRA output not linked to the sampler).
- Strength set to 0 (default strength for LoRAs/ControlNets may be 0).
Solutions:
- Add the LoRA trigger word:
  - Check the model’s Civitai page for required trigger words (e.g., `(my-custom-lora:1.2)` in prompts).
- Verify node connections:
  - LoRA: Connect the `Load LoRA` → `model` output to the `KSampler` → `model` input.
  - ControlNet: Connect the `Load ControlNet` → `control_net` output to an `Apply ControlNet` node, then link its `conditioning` output to the `KSampler` → `positive` input.
- Adjust strength:
- Set LoRA strength to 0.5–1.0 (too high = distortion, too low = no effect).
- Set ControlNet strength to 0.7–1.0 (lower for subtle effects).
3. Workflow Execution Errors
Issue 1: "Missing Input" Error (Node Connection Issues)
Symptoms: "Error: Missing input 'model' for node KSampler" or similar.
Causes:
- Required node inputs are not connected (e.g., `KSampler` missing `model` or `conditioning`).
- Disconnected nodes (an accidental click/drag broke a link).
Solutions:
- Use ComfyUI’s "Validate Workflow" tool:
  - Click the `Validate` button (top-right, checkmark icon) to highlight missing connections.
- Reconnect core nodes (basic workflow example):
  - `Load Checkpoint` → `model` → `KSampler` → `model`
  - `Load Checkpoint` → `clip` → `CLIP Text Encode` → `clip`
  - `CLIP Text Encode` → `conditioning` → `KSampler` → `positive`
  - `KSampler` → `latent` → `VAE Decode` → `image` → `Save Image`
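The "missing input" check can be illustrated with a toy validator over a workflow represented as a dict. The node names and required-input tables below are a simplified assumption for illustration, not ComfyUI's actual schema:

```python
# Required inputs per node type (simplified subset, for illustration only).
REQUIRED_INPUTS = {
    "KSampler": {"model", "positive", "negative", "latent_image"},
    "CLIP Text Encode": {"clip", "text"},
    "Save Image": {"images"},
}

def find_missing_inputs(workflow: dict) -> list[str]:
    """workflow maps node_id -> {"type": ..., "inputs": {input_name: source_node}}."""
    errors = []
    for node_id, node in workflow.items():
        required = REQUIRED_INPUTS.get(node["type"], set())
        missing = required - set(node["inputs"])
        for name in sorted(missing):
            errors.append(f"node {node_id} ({node['type']}): missing input '{name}'")
    return errors

wf = {
    "3": {"type": "KSampler", "inputs": {"model": "4", "positive": "6", "negative": "7"}},
}
print(find_missing_inputs(wf))  # flags node 3's missing 'latent_image'
```

This is essentially what validation does for you: diff each node's wired inputs against what its type requires.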
Issue 2: Workflow Queues but No Image Generates
Symptoms: Queue prompt shows "Running" but no output, or output/ folder is empty.
Causes:
- `Save Image` node has no valid output path.
- Empty positive prompt (the AI generates black images with no instructions).
- KSampler steps set to 0 (accidental configuration).
Solutions:
- Check the `Save Image` node:
  - Ensure `output_path` is set to `./output/` (default) or a valid folder.
  - Verify the node receives an `image` input (from `VAE Decode`, which decodes the `KSampler` latent).
- Add a positive prompt:
  - Avoid empty `CLIP Text Encode` nodes (e.g., add "photorealistic cat" as a test).
- Reset KSampler settings:
  - Set `steps` to 20–50 (default: 25) and `cfg` to 7–10 (default: 8).
Issue 3: "Type Error" (Incompatible Node Types)
Symptoms: "TypeError: Cannot convert 'ControlNet' to 'Model'" or similar.
Causes:
- Connecting incompatible node outputs (e.g., a ControlNet to the `KSampler` → `model` input).
- Using outdated nodes (e.g., SD 1.5 nodes with SDXL models).
Solutions:
- Match node types to inputs:
  - The `KSampler` → `model` input requires a checkpoint model (from `Load Checkpoint`), not a LoRA/ControlNet.
  - ControlNet models (from `Load ControlNet`) are applied through an `Apply ControlNet` node, not plugged into `KSampler` directly.
- Use model-specific nodes:
  - SDXL: Use `CLIP Text Encode (SDXL)` (two text inputs) instead of the standard `CLIP Text Encode`.
  - FLUX: Use `FLUX Sampler` instead of `KSampler`.
4. Generation Quality & Output Issues
Issue 1: Black/Blank Images Generated
Symptoms: Output is a solid black image or empty canvas.
Causes:
- Empty positive prompt (AI has no instructions).
- Overly strict negative prompts (e.g., "all, everything, image" blocks generation).
- Model corruption (checkpoint/LoRA is broken).
Solutions:
- Test with a simple positive prompt:
- Use "a red apple on a white background" (avoids ambiguity).
- Simplify negative prompts:
- Remove overbroad terms (e.g., keep only "blurry, low quality, deformed").
- Switch to a known-good model:
  - Use ComfyUI’s default checkpoint (e.g., `sd_xl_base_1.0.safetensors`) to rule out model issues.
Issue 2: Blurry/Low-Quality Output
Symptoms: Images are grainy, pixelated, or lack detail.
Causes:
- Low image resolution (e.g., 256x256).
- Insufficient KSampler steps (e.g., <10 steps).
- Model mismatch (e.g., low-detail model used for photorealism).
Solutions:
- Increase resolution:
  - Set `width`/`height` (on the `Empty Latent Image` node) to 1024x1024 (SD 1.5) or 1024x1024–2048x2048 (SDXL).
- Adjust KSampler settings:
  - Set `steps` to 30–50 (more steps = more detail).
  - Use a high-quality sampler (e.g., `DPM++ 2M Karras`, or `Euler a` for faster results).
- Use a detail-focused model:
  - Switch to checkpoints like `Realistic Vision` (photorealism) or `DreamShaper` (general quality).
Issue 3: Prompt Not Being Followed (AI Ignores Instructions)
Symptoms: Output doesn’t match the prompt (e.g., "blue car" generates a red car).
Causes:
- Vague prompts (lack of specific details).
- Overweighted keywords (distorts the model’s focus).
- Model bias (some models prioritize certain styles over prompts).
Solutions:
- Refine prompts with specific details:
- ❌ Weak: "blue car" → ✅ Strong: "2024 Tesla Model 3, deep blue metallic paint, sunny day, photorealistic".
- Adjust keyword weighting:
  - Avoid excessive parentheses (e.g., `((((blue car))))` → use `(blue car:1.2)` for mild emphasis).
- Use a prompt-friendly model:
  - SDXL models (e.g., `sd_xl_base_1.0`) follow prompts more accurately than SD 1.5.
5. Performance & Resource Issues
Issue 1: "Out of Memory (OOM)" Error (VRAM Limits)
Symptoms: "CUDA out of memory" during generation (most common with large models/resolutions).
Causes:
- Image resolution too high (e.g., 4096x4096 on 8GB VRAM).
- Multiple large models loaded (e.g., SDXL checkpoint + 4 LoRAs + ControlNet).
Solutions:
- Reduce resolution:
- 8GB VRAM: Max 1024x1024 (SD 1.5) or 768x768 (SDXL).
- 12GB VRAM: Max 1536x1536 (SDXL) or 1024x1024 (SD 1.5 + LoRAs).
- Enable model offloading:
  - Go to `Settings` → `Performance` → check "Enable Model Offloading".
- Use quantized models:
  - Replace FP16 models with INT4/FP8 quantized versions (can cut VRAM usage by roughly 50%).
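The resolution caps above can be expressed as a lookup helper. The thresholds below simply encode the guidance in this section; they are rules of thumb, not hard limits, and real headroom shrinks as you stack LoRAs and ControlNets:

```python
# Rule-of-thumb max square resolution per (model family, VRAM GB),
# taken from the guidance above.
LIMITS = {
    ("sd15", 8): 1024,
    ("sdxl", 8): 768,
    ("sd15", 12): 1024,
    ("sdxl", 12): 1536,
}

def max_resolution(model_family: str, vram_gb: int) -> int:
    """Pick the largest listed tier at or below the available VRAM (0 if none fits)."""
    best = 0
    for gb in sorted(g for (fam, g) in LIMITS if fam == model_family):
        if vram_gb >= gb:
            best = LIMITS[(model_family, gb)]
    return best

print(max_resolution("sdxl", 12))  # 1536
```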
Issue 2: Slow Generation Speed (Long Wait Times)
Symptoms: Generation takes 5+ minutes for a 1024x1024 image.
Causes:
- CPU-only mode (GPU not being used).
- Outdated GPU drivers.
- Overly high sampler steps/resolution.
Solutions:
- Verify GPU acceleration is enabled:
- Check the ComfyUI terminal for "Using NVIDIA GPU" (Windows) or "CUDA available: True".
- If using CPU: Reinstall GPU drivers and ensure PyTorch is GPU-enabled.
- Optimize settings:
- Reduce KSampler steps to 20–30 (balance of speed/quality).
  - Use faster samplers (e.g., `Euler a` or `DPM++ SDE Karras`).
- Update PyTorch for GPU:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```
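The GPU check in the first step can be scripted. A sketch that reports the device ComfyUI will end up using, and degrades gracefully if PyTorch isn't installed:

```python
def describe_compute_device() -> str:
    """Report whether generation will run on a CUDA GPU or fall back to CPU."""
    try:
        import torch
    except ImportError:
        return "PyTorch not installed - ComfyUI cannot run"
    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(0)
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        return f"GPU: {name} ({vram_gb:.0f} GB VRAM)"
    return "CPU only - expect very slow generation"

print(describe_compute_device())
```

If this reports CPU only on a machine with an NVIDIA card, the installed PyTorch wheel is almost certainly a CPU-only build; reinstall it with the CUDA index URL shown above.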
6. Other Common Issues
Issue 1: ComfyUI Crashes When Using Plugins/Extensions
Symptoms: ComfyUI closes immediately after enabling a custom node (e.g., Hunyuan3D, Z-Image-Turbo).
Causes:
- Incompatible extension (outdated or designed for older ComfyUI versions).
- Missing extension dependencies.
Solutions:
- Disable the problematic extension:
  - Navigate to `ComfyUI/custom_nodes/` and delete the extension folder.
- Install extension dependencies:
  - Read the extension’s `README.md` for required packages (e.g., `pip install -r requirements.txt`).
- Use ComfyUI Manager to install compatible extensions:
- Avoid manual installs unless the extension is not available in the manager.
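Instead of deleting a suspect extension outright, you can move it aside so it is easy to restore. A sketch — renaming with a `.disabled` suffix mirrors what ComfyUI Manager does when you disable a node pack, but treat the exact convention as an assumption:

```python
from pathlib import Path

def disable_custom_node(custom_nodes_dir: str, name: str) -> Path:
    """Rename custom_nodes/<name> to <name>.disabled so it is skipped on launch."""
    src = Path(custom_nodes_dir) / name
    if not src.is_dir():
        raise FileNotFoundError(src)
    dst = src.with_name(src.name + ".disabled")
    src.rename(dst)
    return dst
```

To re-enable the extension later, rename the folder back (drop the `.disabled` suffix) and restart ComfyUI.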
Issue 2: Text Rendering Issues (Blurry/Incorrect Text in Images)
Symptoms: AI-generated text (e.g., "SALE" on a poster) is unreadable or distorted.
Causes:
- Most diffusion models are poor at text rendering (SD 1.5/XL struggle with precise typography).
- Overly complex text prompts (e.g., long sentences or custom fonts).
Solutions:
- Use text-focused models:
  - Switch to models like `FLUX.1 Kontext` or `TextGen-XL` (optimized for text).
- Simplify text prompts:
- Use short, uppercase words (e.g., "SALE" instead of "Limited Time Sale 50% Off").
- Post-process text in Photoshop/GIMP:
- Generate the image without text, then add text manually for clarity.
Best Practices to Prevent Common Issues
- Organize Model Folders: Use subfolders (e.g., `models/lora/character/`, `models/controlnet/pose/`) to avoid path confusion.
- Backup Workflows: Save successful workflows (`File → Save`) to avoid rebuilding nodes after crashes.
- Test Models Before Use: Verify new models with a simple workflow (e.g., `Load Checkpoint` → `KSampler` → `Save Image`) before integrating them into complex setups.
- Regularly Update: Keep ComfyUI and extensions updated (ComfyUI Manager → `Update All`).
- Monitor VRAM: Use tools like NVIDIA GeForce Experience or `nvitop` (Linux) to track VRAM usage and avoid OOM errors.
Where to Get More Help
If you can’t resolve an issue with this guide:
- ComfyUI Discord: Discord.gg/comfyui (community support).
- GitHub Issues: ComfyUI GitHub (bug reports).
- Civitai Forums: Civitai.com/forums (model-specific issues).
- ComfyUI Documentation: ComfyUI Docs (official guides).
When asking for help, include:
- ComfyUI version (check `About` → `Version`).
- GPU model and VRAM (e.g., RTX 4070 12GB).
- Screenshot of the error message and workflow.
- Model names/links (e.g., "LoRA from Civitai: https://civitai.com/models/12345").