ComfyUI Common Issues & Troubleshooting Guide

A comprehensive reference for fixing the most frequent problems in ComfyUI—from installation failures to generation errors, with step-by-step solutions for beginners and advanced users.

We cover:

  • Installation/launch failures
  • Model loading/compatibility issues
  • Workflow execution errors
  • Generation quality problems
  • Performance/resource limits
  • Plugin/extension conflicts and other common issues

Quick Troubleshooting Checklist (5-Minute Fixes)

Before diving into detailed solutions, try these quick checks; they resolve the majority of common issues:

  1. Restart ComfyUI (many changes require a restart to take effect).
  2. Verify model files are in the correct directory (e.g., LoRAs → models/lora/, checkpoints → models/checkpoints/).
  3. Update ComfyUI to the latest version (run git pull in your ComfyUI folder or use ComfyUI Manager).
  4. Free up VRAM: Close other GPU-intensive apps (games, video editors) or reduce image resolution.
  5. Check node connections: Ensure all required inputs (e.g., model, conditioning, image) are linked.
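The environment portions of this checklist can be partly automated. Below is a minimal sketch using only the Python standard library; the version range follows this guide, and the presence of nvidia-smi is used only as a cheap proxy for a working NVIDIA driver:

```python
# Quick environment sanity check for the first two failure classes:
# Python version and GPU driver availability.
import sys
import shutil

def check_python() -> bool:
    """ComfyUI targets Python 3.10-3.11; warn outside that range."""
    ok = (3, 10) <= sys.version_info[:2] <= (3, 11)
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"{'OK' if ok else 'unsupported - install 3.10.x'}")
    return ok

def check_nvidia_driver() -> bool:
    """nvidia-smi on PATH is a rough proxy for an installed NVIDIA driver."""
    found = shutil.which("nvidia-smi") is not None
    print("nvidia-smi found" if found else "no nvidia-smi - check GPU drivers")
    return found

if __name__ == "__main__":
    check_python()
    check_nvidia_driver()
```

Run this from the same Python environment that launches ComfyUI, otherwise the version it reports may differ from the one ComfyUI actually uses.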

1. Installation & Launch Issues

Issue 1: ComfyUI Fails to Launch (No Window/Command Line Errors)

Symptoms: Double-clicking run_nvidia_gpu.bat (Windows) or run.sh (macOS/Linux) does nothing, or the terminal closes immediately.
Causes:

  • Unsupported Python version (ComfyUI targets Python 3.10–3.11; 3.12+ may not work with all dependencies).
  • GPU driver outdated (NVIDIA/AMD drivers incompatible with PyTorch).
  • Corrupted dependencies (missing or broken Python packages).

Solutions:

  1. Install Python 3.10.11 (recommended): python.org/downloads/release/python-31011/ (check "Add Python to PATH" during installation).
  2. Update GPU drivers: download the latest driver from NVIDIA (nvidia.com/drivers) or AMD (amd.com/support).
  3. Repair dependencies:
    # Navigate to your ComfyUI folder
    cd path/to/ComfyUI
    # Reinstall requirements
    pip install --upgrade -r requirements.txt
    

Issue 2: "CUDA Out of Memory" on Launch (Not During Generation)

Symptoms: ComfyUI crashes immediately with "CUDA out of memory" even before loading a workflow.
Causes:

  • Multiple instances of ComfyUI running in the background.
  • GPU VRAM reserved by other apps (e.g., Discord hardware acceleration, Chrome GPU tasks).

Solutions:

  1. Close all background apps using GPU:
    • Windows: Open Task Manager → "Performance" → "GPU" → End tasks using significant VRAM.
    • macOS/Linux: Use htop (Linux) or Activity Monitor (macOS) to kill GPU-heavy processes.
  2. Launch ComfyUI in Low VRAM Mode:
    • Edit run_nvidia_gpu.bat (Windows) or run.sh (Linux/macOS).
    • Add --lowvram to the launch command:
      python main.py --lowvram
      

Issue 3: "download fp8-version of z-image-turbo" Message

Symptoms: A workflow or error message asks you to download the FP8 version of the z-image-turbo model.

Solutions:

  1. Download the FP8 version of the z-image-turbo model from the source linked by the workflow or error message.
  2. Place the model file in the models/diffusion_models/ folder, then restart ComfyUI so it is detected.

2. Model Loading & Compatibility Issues

Issue 1: Model Not Appearing in ComfyUI (Checkpoints/LoRAs/ControlNets)

Symptoms: Downloaded models don’t show up in node dropdowns (e.g., Load Checkpoint → no new model).
Causes:

  • Model in the wrong directory (e.g., LoRA in models/checkpoints/).
  • Unsupported file format (e.g., .ckpt corrupted, .safetensors with incorrect extension).
  • ComfyUI didn’t scan the model folder (requires restart).

Solutions:

  1. Verify model directory (critical!):
    Model Type   | Correct Folder       | Supported Formats
    Checkpoints  | models/checkpoints/  | .ckpt, .safetensors
    LoRAs        | models/lora/         | .ckpt, .safetensors
    ControlNets  | models/controlnet/   | .pth, .safetensors
    Embeddings   | models/embeddings/   | .bin, .pt
    VAEs         | models/vae/          | .ckpt, .safetensors
  2. Rename corrupted/incorrect files:
    • Ensure extensions are lowercase (e.g., .safetensors not .Safetensors).
    • Remove special characters from filenames (e.g., my-lora.safetensors instead of my#lora!v2.safetensors).
  3. Restart ComfyUI (it only scans model folders on launch).
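Wrong-folder and wrong-extension mistakes are easy to script away. Here is a minimal audit sketch based on the table above; the ComfyUI root path is whatever your own install uses:

```python
# Flag model files whose extension doesn't match the folder they sit in,
# mirroring the folder/format table in this guide.
from pathlib import Path

EXPECTED = {
    "checkpoints": {".ckpt", ".safetensors"},
    "lora": {".ckpt", ".safetensors"},
    "controlnet": {".pth", ".safetensors"},
    "embeddings": {".bin", ".pt"},
    "vae": {".ckpt", ".safetensors"},
}

def audit_models(comfy_root: str) -> list[str]:
    """Return a warning string per file with an unexpected extension."""
    warnings = []
    models = Path(comfy_root) / "models"
    for folder, exts in EXPECTED.items():
        folder_path = models / folder
        if not folder_path.is_dir():
            continue  # skip folders that don't exist in this install
        for f in folder_path.glob("*"):
            if f.is_file() and f.suffix.lower() not in exts:
                warnings.append(f"{f.name}: unexpected extension in {folder}/")
    return warnings
```

Example usage: `print(audit_models("path/to/ComfyUI"))` — an empty list means every scanned file has an extension appropriate for its folder.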

Issue 2: "Model Load Failed" Error (Corrupted/Incompatible Models)

Symptoms: ComfyUI throws "Failed to load model" when selecting a checkpoint/LoRA.
Causes:

  • Corrupted download (incomplete file due to network issues).
  • Model incompatible with your base model (e.g., SDXL model used with SD 1.5 workflow).
  • Quantized model missing dependencies (e.g., GGUF models require llama-cpp-python).

Solutions:

  1. Re-download the model:
    • Use a download manager (e.g., IDM, wget) to avoid corruption.
    • Verify file size matches the source (Civitai/Hugging Face lists expected sizes).
  2. Check compatibility:
    • SD 1.5 models → Use with SD 1.5 checkpoints (not SDXL).
    • SDXL models → Require SDXL-compatible nodes (e.g., CLIP Text Encode (SDXL)).
  3. For quantized models (GGUF/INT4):
    # Install required dependencies
    pip install llama-cpp-python accelerate
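Beyond checking file size, a hash comparison reliably catches truncated or corrupted downloads. A small sketch using hashlib; the expected hash is whatever the model's download page (Civitai/Hugging Face) publishes:

```python
# Verify a downloaded model against its published SHA256 hash.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB checkpoints
    don't get loaded into RAM all at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Compare case-insensitively against the hash from the model page."""
    return sha256_of(path).lower() == expected_hex.lower()
```

If `verify_download` returns False, re-download the file before debugging anything else; a bad hash explains "Failed to load model" on its own.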
    

Issue 3: LoRA/ControlNet Has No Effect on Generation

Symptoms: Model loads successfully, but generation output doesn’t reflect the LoRA/ControlNet.
Causes:

  • Missing trigger word (LoRAs require specific keywords, e.g., my-custom-lora in prompts).
  • Incorrect node connections (LoRA output not linked to sampler).
  • Strength set to 0 (default strength for LoRAs/ControlNets may be 0).

Solutions:

  1. Add the LoRA trigger word:
    • Check the model’s Civitai page for required trigger words (e.g., (my-custom-lora:1.2) in prompts).
  2. Verify node connections:
    • LoRA: Connect the Load LoRA node's model output to the KSampler's model input.
    • ControlNet: Connect the Load ControlNet node's control_net output to an Apply ControlNet node, whose conditioning output then feeds the KSampler's positive input.
  3. Adjust strength:
    • Set LoRA strength to 0.5–1.0 (too high = distortion, too low = no effect).
    • Set ControlNet strength to 0.7–1.0 (lower for subtle effects).

3. Workflow Execution Errors

Issue 1: "Missing Input" Error (Node Connection Issues)

Symptoms: "Error: Missing input 'model' for node KSampler" or similar.
Causes:

  • Required node inputs are not connected (e.g., KSampler missing model or conditioning).
  • Disconnected nodes (accidental click/drag broke a link).

Solutions:

  1. Use ComfyUI’s "Validate Workflow" tool:
    • Click the Validate button (top-right, checkmark icon) to highlight missing connections.
  2. Reconnect core nodes (basic workflow example):
    • Load Checkpoint (model) → KSampler (model)
    • Load Checkpoint (clip) → CLIP Text Encode (clip)
    • CLIP Text Encode (conditioning) → KSampler (positive)
    • KSampler (latent) → VAE Decode (samples), then VAE Decode (image) → Save Image (images)
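When the graph looks right but errors persist, it can help to queue the workflow programmatically: ComfyUI serves an HTTP API (default port 8188) whose POST /prompt endpoint accepts a workflow exported in API-JSON form. A minimal sketch; the host and client_id values are placeholders to adjust for your setup:

```python
# Queue a workflow against a running ComfyUI instance via its HTTP API.
# A 400 response typically reports exactly which node input is missing.
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "troubleshooting") -> bytes:
    """Wrap a workflow graph in the JSON envelope /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST the workflow and return the server's JSON response."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Export the workflow JSON from the UI (enable dev mode to get the API-format save option), load it with `json.load`, and pass it to `queue_prompt` while ComfyUI is running.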

Issue 2: Workflow Queues but No Image Generates

Symptoms: Queue prompt shows "Running" but no output, or output/ folder is empty.
Causes:

  • Save Image node has no valid output path.
  • Empty positive prompt (AI generates black images with no instructions).
  • KSampler steps set to 0 (accidental configuration).

Solutions:

  1. Check Save Image node:
    • Ensure images are written to a valid location (by default, ComfyUI saves to ./output/, named by the node's filename_prefix).
    • Verify the node actually receives an image: it should be fed by a VAE Decode node downstream of the KSampler.
  2. Add a positive prompt:
    • Avoid empty CLIP Text Encode nodes (e.g., add "photorealistic cat" as a test).
  3. Reset KSampler settings:
    • Set steps to 20–50 (default: 25) and cfg to 7–10 (default: 8).

Issue 3: "Type Error" (Incompatible Node Types)

Symptoms: "TypeError: Cannot convert 'ControlNet' to 'Model'" or similar.
Causes:

  • Connecting incompatible node outputs (e.g., a ControlNet model fed into the KSampler's model input).
  • Using outdated nodes (e.g., SD 1.5 nodes with SDXL models).

Solutions:

  1. Match node types to inputs:
    • The KSampler's model input requires a checkpoint model (from Load Checkpoint), not a LoRA/ControlNet.
    • ControlNet models (from Load ControlNet) are applied through an Apply ControlNet node on the conditioning, not plugged into the KSampler directly.
  2. Use model-specific nodes:
    • SDXL: Use CLIP Text Encode (SDXL) (two text inputs) instead of the standard CLIP Text Encode.
    • FLUX: Use FLUX Sampler instead of KSampler.

4. Generation Quality & Output Issues

Issue 1: Black/Blank Images Generated

Symptoms: Output is a solid black image or empty canvas.
Causes:

  • Empty positive prompt (AI has no instructions).
  • Overly strict negative prompts (e.g., "all, everything, image" blocks generation).
  • Model corruption (checkpoint/LoRA is broken).

Solutions:

  1. Test with a simple positive prompt:
    • Use "a red apple on a white background" (avoids ambiguity).
  2. Simplify negative prompts:
    • Remove overbroad terms (e.g., keep only "blurry, low quality, deformed").
  3. Switch to a known-good model:
    • Use ComfyUI’s default checkpoint (e.g., sd_xl_base_1.0.safetensors) to rule out model issues.

Issue 2: Blurry/Low-Quality Output

Symptoms: Images are grainy, pixelated, or lack detail.
Causes:

  • Low image resolution (e.g., 256x256).
  • Insufficient KSampler steps (e.g., <10 steps).
  • Model mismatch (e.g., low-detail model used for photorealism).

Solutions:

  1. Increase resolution:
    • Set the Empty Latent Image width/height to 512x512–768x768 (SD 1.5) or 1024x1024–2048x2048 (SDXL).
  2. Adjust KSampler settings:
    • Set steps to 30–50 (more steps = more detail).
    • Use a high-quality sampler (e.g., DPM++ 2M Karras or Euler a for faster results).
  3. Use a detail-focused model:
    • Switch to checkpoints like Realistic Vision (photorealism) or DreamShaper (general quality).

Issue 3: Prompt Not Being Followed (AI Ignores Instructions)

Symptoms: Output doesn’t match the prompt (e.g., "blue car" generates a red car).
Causes:

  • Vague prompts (lack of specific details).
  • Overweighted keywords (distorts the model’s focus).
  • Model bias (some models prioritize certain styles over prompts).

Solutions:

  1. Refine prompts with specific details:
    • ❌ Weak: "blue car" → ✅ Strong: "2024 Tesla Model 3, deep blue metallic paint, sunny day, photorealistic".
  2. Adjust keyword weighting:
    • Avoid excessive parentheses (e.g., ((((blue car)))) → use (blue car:1.2) for mild emphasis).
  3. Use a prompt-friendly model:
    • SDXL models (e.g., sd_xl_base_1.0) follow prompts more accurately than SD 1.5.
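For reference, the (text:weight) emphasis syntax is mechanical: a weighted span is parenthesized text with a colon-separated multiplier, and anything unweighted defaults to 1.0. A simplified parser sketch (real prompt parsers also handle nesting and escaped parentheses, which this omits):

```python
# Split a prompt into (text, weight) spans using the "(text:weight)"
# emphasis syntax; plain text defaults to weight 1.0.
import re

WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    spans, pos = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        plain = prompt[pos:m.start()].strip(", ")
        if plain:
            spans.append((plain, 1.0))          # unweighted text
        spans.append((m.group(1), float(m.group(2))))  # weighted span
        pos = m.end()
    tail = prompt[pos:].strip(", ")
    if tail:
        spans.append((tail, 1.0))
    return spans
```

For example, `parse_weights("(blue car:1.2), sunny day")` yields `[("blue car", 1.2), ("sunny day", 1.0)]`, which makes it easy to see why a stack of bare parentheses adds no controlled emphasis.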

5. Performance & Resource Issues

Issue 1: "Out of Memory (OOM)" Error (VRAM Limits)

Symptoms: "CUDA out of memory" during generation (most common with large models/resolutions).
Causes:

  • Image resolution too high (e.g., 4096x4096 on 8GB VRAM).
  • Multiple large models loaded (e.g., SDXL checkpoint + 4 LoRAs + ControlNet).

Solutions:

  1. Reduce resolution:
    • 8GB VRAM: Max 1024x1024 (SD 1.5) or 768x768 (SDXL).
    • 12GB VRAM: Max 1536x1536 (SDXL) or 1024x1024 (SD 1.5 + LoRAs).
  2. Enable model offloading:
    • Go to Settings → Performance → check "Enable Model Offloading".
  3. Use quantized models:
    • Replace full-precision (FP16) models with INT4/FP8 quantized versions (reduces VRAM usage by roughly 50%).

Issue 2: Slow Generation Speed (Long Wait Times)

Symptoms: Generation takes 5+ minutes for a 1024x1024 image.
Causes:

  • CPU-only mode (GPU not being used).
  • Outdated GPU drivers.
  • Overly high sampler steps/resolution.

Solutions:

  1. Verify GPU acceleration is enabled:
    • Check the ComfyUI terminal for "Using NVIDIA GPU" (Windows) or "CUDA available: True".
    • If using CPU: Reinstall GPU drivers and ensure PyTorch is GPU-enabled.
  2. Optimize settings:
    • Reduce KSampler steps to 20–30 (balance of speed/quality).
    • Use faster samplers (e.g., Euler a or DPM++ SDE Karras).
  3. Update PyTorch for GPU:
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    

6. Other Common Issues

Issue 1: ComfyUI Crashes When Using Plugins/Extensions

Symptoms: ComfyUI closes immediately after enabling a custom node (e.g., Hunyuan3D, Z-Image-Turbo).
Causes:

  • Incompatible extension (outdated or designed for older ComfyUI versions).
  • Missing extension dependencies.

Solutions:

  1. Disable the problematic extension:
    • Navigate to ComfyUI/custom_nodes/ and delete the extension folder.
  2. Install extension dependencies:
    • Read the extension’s README.md for required packages (e.g., pip install -r requirements.txt).
  3. Use ComfyUI Manager to install compatible extensions:
    • Avoid manual installs unless the extension is not available in the manager.

Issue 2: Text Rendering Issues (Blurry/Incorrect Text in Images)

Symptoms: AI-generated text (e.g., "SALE" on a poster) is unreadable or distorted.
Causes:

  • Most diffusion models are poor at text rendering (SD 1.5/XL struggle with precise typography).
  • Overly complex text prompts (e.g., long sentences or custom fonts).

Solutions:

  1. Use text-focused models:
    • Switch to models like FLUX.1 Kontext or TextGen-XL (optimized for text).
  2. Simplify text prompts:
    • Use short, uppercase words (e.g., "SALE" instead of "Limited Time Sale 50% Off").
  3. Post-process text in Photoshop/GIMP:
    • Generate the image without text, then add text manually for clarity.

Best Practices to Prevent Common Issues

  1. Organize Model Folders: Use subfolders (e.g., models/lora/character/, models/controlnet/pose/) to avoid path confusion.
  2. Backup Workflows: Save successful workflows (File → Save) to avoid rebuilding nodes after crashes.
  3. Test Models Before Use: Verify new models with a simple workflow (e.g., Load Checkpoint → KSampler → VAE Decode → Save Image) before integrating into complex setups.
  4. Regularly Update: Keep ComfyUI and extensions updated (ComfyUI Manager → Update All).
  5. Monitor VRAM: Use tools like NVIDIA GeForce Experience or nvitop (Linux) to track VRAM usage and avoid OOM errors.
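For point 5, a short shell snippet covers most monitoring needs; this guarded version degrades gracefully on machines without an NVIDIA driver (append `-l 1` to the nvidia-smi call to refresh once per second):

```shell
# Print current VRAM usage, or a hint when no NVIDIA driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.used,memory.total --format=csv
else
  echo "nvidia-smi not found - install or repair the NVIDIA driver"
fi
```

Keep this running in a second terminal during generation to see exactly which workflow step pushes VRAM toward its limit.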

Where to Get More Help

If you can’t resolve an issue with this guide, ask the ComfyUI community (e.g., the project’s GitHub issues or Discord).

When asking for help, include:

  • ComfyUI version (check About → Version).
  • GPU model and VRAM (e.g., RTX 4070 12GB).
  • Screenshot of the error message and workflow.
  • Model names/links (e.g., "LoRA from Civitai: https://civitai.com/models/12345").