[{"data":1,"prerenderedAt":492},["ShallowReactive",2],{"content-/en/basic-tutorial/inpainting-workflow/":3},{"id":4,"title":5,"body":6,"description":16,"extension":485,"meta":486,"navigation":487,"path":488,"seo":489,"stem":490,"__hash__":491},"content/en/basic-tutorial/inpainting-workflow.md","Precision Surgery: A Complete Guide to Inpainting in ComfyUI",{"type":7,"value":8,"toc":467},"minimark",[9,13,17,20,23,26,29,34,37,49,52,69,71,75,78,83,95,99,109,116,142,146,161,164,193,211,215,226,229,283,287,301,303,307,313,316,322,326,375,378,380,384,387,421,423,427,430,433],[10,11,5],"h1",{"id":12},"precision-surgery-a-complete-guide-to-inpainting-in-comfyui",[14,15,16],"p",{},"You have just spent twenty minutes dialing in the perfect prompt. The composition is flawless, the lighting is cinematic, but there is one glaring issue: your character has six fingers, or there is a bizarre, artifact-ridden coffee cup floating in the background.",[14,18,19],{},"You don't want to reroll the entire image and lose the seed you worked so hard to find. You just want to fix that specific area. This is where inpainting comes in.",[14,21,22],{},"Doing this in web UIs like Automatic1111 is straightforward but rigid. Doing it in ComfyUI gives you absolute control over the latent space, allowing you to blend masks, inject specific control nets, and dial in the exact denoise values for seamless corrections.",[14,24,25],{},"This guide will walk you through building a rock-solid inpainting pipeline from scratch, covering both standard models (like SDXL) and the newer, more complex Flux ecosystem.",[27,28],"hr",{},[30,31,33],"h2",{"id":32},"_1-the-core-logic-of-node-based-inpainting","1. The Core Logic of Node-Based Inpainting",[14,35,36],{},"Before dragging nodes onto the canvas, you need to understand the data flow. 
Inpainting is essentially Image-to-Image (I2I), but with a barrier.",[14,38,39,40,44,45],{},"In a standard I2I workflow, you encode an image into latent space, add global noise, and let the sampler denoise the entire picture. Inpainting adds a ",[41,42,43],"strong",{},"Mask",". The mask tells the sampler: ",[46,47,48],"em",{},"\"Only apply noise and denoise the pixels inside this white area. Ignore the black area.\"",[14,50,51],{},"To do this in ComfyUI, you need three specific components that differ from a basic generation workflow:",[53,54,55,59,62],"ol",{},[56,57,58],"li",{},"An image input with a defined mask.",[56,60,61],{},"A specialized VAE encoding node that respects the mask.",[56,63,64,65,68],{},"Targeted prompting to describe ",[46,66,67],{},"only"," what should be inside the masked area.",[27,70],{},[30,72,74],{"id":73},"_2-building-the-standard-inpainting-pipeline","2. Building the Standard Inpainting Pipeline",[14,76,77],{},"Let’s build the foundational workflow. Start with a blank canvas.",[79,80,82],"h3",{"id":81},"step-1-image-and-mask-input","Step 1: Image and Mask Input",[14,84,85,86,90,91,94],{},"Right-click and add ",[87,88,89],"code",{},"Add Node > image > Load Image",". Upload the image you want to fix.\nRight-click the image directly inside the node and select ",[41,92,93],{},"\"Open in MaskEditor\"",". Draw over the area you want to replace. Keep your brush strokes slightly larger than the object itself to give the model room to blend the edges. 
Click \"Save to node\".",[79,96,98],{"id":97},"step-2-the-checkpoint-and-conditioning","Step 2: The Checkpoint and Conditioning",[14,100,101,102,105,106],{},"Add your ",[87,103,104],{},"Load Checkpoint"," node.\n",[46,107,108],{},"Note: While you can use standard models, using an \"inpainting-specific\" model (which has additional channels trained specifically for masking) will generally yield fewer visible seams.",[14,110,111,112,115],{},"Add two ",[87,113,114],{},"CLIP Text Encode (Prompt)"," nodes.",[117,118,119,132],"ul",{},[56,120,121,124,125,127,128,131],{},[41,122,123],{},"Positive Prompt:"," Describe ",[46,126,67],{}," what you want to appear in the mask. If you masked a hand, type ",[87,129,130],{},"a perfect hand, five fingers, resting on the table",". Do not describe the rest of the image.",[56,133,134,137,138,141],{},[41,135,136],{},"Negative Prompt:"," ",[87,139,140],{},"mutated, missing fingers, extra digits, bad anatomy",".",[79,143,145],{"id":144},"step-3-vae-encode-for-inpainting","Step 3: VAE Encode (for Inpainting)",[14,147,148,149,152,153,156,157,160],{},"This is the most critical step. Do ",[41,150,151],{},"not"," use the standard ",[87,154,155],{},"VAE Encode"," node.\nSearch for and add the ",[87,158,159],{},"VAE Encode (for Inpainting)"," node.",[14,162,163],{},"Connect the nodes as follows:",[117,165,166,176,184],{},[56,167,168,171,172,175],{},[87,169,170],{},"IMAGE"," from your Load Image node -> ",[87,173,174],{},"pixels"," input.",[56,177,178,171,181,175],{},[87,179,180],{},"MASK",[87,182,183],{},"mask",[56,185,186,189,190,175],{},[87,187,188],{},"VAE"," from your Checkpoint Loader -> ",[87,191,192],{},"vae",[14,194,195,202,203,206,207,210],{},[41,196,197,198,201],{},"The ",[87,199,200],{},"grow_mask_by"," parameter:"," This expands your hand-drawn mask by a set number of pixels. Set this to ",[87,204,205],{},"6"," or ",[87,208,209],{},"8",". 
It creates a buffer zone that helps the new generation blend perfectly into the original image without leaving a harsh, visible line.",[79,212,214],{"id":213},"step-4-the-ksampler-configuration","Step 4: The KSampler Configuration",[14,216,217,218,221,222,225],{},"Add a standard ",[87,219,220],{},"KSampler",". Connect your Model, Positive, Negative, and the ",[87,223,224],{},"LATENT"," output from your VAE Encode (for Inpainting) node.",[14,227,228],{},"Dialing in the KSampler for inpainting requires nuance:",[117,230,231,237,243,256],{},[56,232,233,236],{},[41,234,235],{},"Steps:"," 25-30.",[56,238,239,242],{},[41,240,241],{},"CFG:"," 6.0 to 7.0. Keep it standard.",[56,244,245,137,248,251,252,255],{},[41,246,247],{},"Sampler/Scheduler:",[87,249,250],{},"dpmpp_2m"," and ",[87,253,254],{},"karras"," are highly reliable for structure replacement.",[56,257,258,261,262],{},[41,259,260],{},"Denoise:"," This is your master control.\n",[117,263,264,271,277],{},[56,265,266,267,270],{},"Set it to ",[87,268,269],{},"0.4 - 0.5"," if you just want to slightly alter what is already there (e.g., changing the color of a shirt).",[56,272,266,273,276],{},[87,274,275],{},"0.75 - 0.85"," if you want to completely replace the object (e.g., turning a coffee cup into a potted plant).",[56,278,279,282],{},[46,280,281],{},"Never"," set it to 1.0 for inpainting, or it will generate completely disconnected noise that ignores the surrounding context.",[79,284,286],{"id":285},"step-5-decoding","Step 5: Decoding",[14,288,289,290,292,293,296,297,300],{},"Pull the ",[87,291,224],{}," output from the KSampler into a ",[87,294,295],{},"VAE Decode"," node, connect your VAE, and output to a ",[87,298,299],{},"Save Image"," node. Run the prompt.",[27,302],{},[30,304,306],{"id":305},"_3-the-flux-inpainting-revolution-differential-diffusion","3. The Flux Inpainting Revolution (Differential Diffusion)",[14,308,309,310,312],{},"If you are working with the Flux model family, the rules change entirely. 
Flux does not use traditional inpainting models, nor does it play nicely with the standard ",[87,311,159],{}," node right out of the box.",[14,314,315],{},"Because of the way Flux handles latent noise across its transformer blocks, forcing a standard mask often results in giant black boxes or deep-fried pixels in the masked area.",[14,317,318,319,141],{},"To inpaint with Flux, you must use ",[41,320,321],{},"Differential Diffusion",[79,323,325],{"id":324},"how-to-adapt-your-workflow-for-flux","How to adapt your workflow for Flux:",[53,327,328,337,353,361],{},[56,329,330,333,334,336],{},[41,331,332],{},"Standard VAE Encode:"," Instead of the inpainting-specific VAE node, use the standard ",[87,335,155],{}," node. Connect your image to it.",[56,338,339,342,343,346,347,349,350,352],{},[41,340,341],{},"Set Latent Noise Mask:"," Add a ",[87,344,345],{},"Set Latent Noise Mask"," node. Connect the ",[87,348,224],{}," from your VAE encoder to it, and connect the ",[87,351,180],{}," directly from your Load Image node.",[56,354,355,358,359,160],{},[41,356,357],{},"The Secret Sauce:"," Search for the ",[87,360,321],{},[56,362,363,364,367,368,371,372,374],{},"Route the ",[87,365,366],{},"MODEL"," output from your Flux checkpoint loader ",[46,369,370],{},"through"," the ",[87,373,321],{}," node, and then into your KSampler.",[14,376,377],{},"Differential Diffusion mathematically bridges the gap between the masked noise and the clean latent background, allowing base Flux models to perform flawless, seamless inpainting without needing a specialized checkpoint.",[27,379],{},[30,381,383],{"id":382},"_4-advanced-troubleshooting-beating-the-seams","4. Advanced Troubleshooting: Beating the \"Seams\"",[14,385,386],{},"Even with a perfect setup, inpainting can sometimes look like a bad Photoshop job. 
Here is how to fix the most common visual artifacts:",[117,388,389,398,412],{},[56,390,391,394,395,141],{},[41,392,393],{},"The Lighting Doesn't Match:"," If the new object looks like it's glowing or has the wrong shadow direction, your Denoise is likely too low. The model doesn't have enough freedom to redraw the lighting context. Bump the Denoise up to ",[87,396,397],{},"0.85",[56,399,400,403,404,407,408,411],{},[41,401,402],{},"Blurry Output in the Mask:"," This happens when the masked area is physically too small in pixel dimensions. The model is trying to generate a complex object in a 64x64 pixel box. The solution is to use an ",[87,405,406],{},"Image Crop"," node before encoding, inpaint on the cropped close-up, and then use an ",[87,409,410],{},"Image Composite"," node to paste it back into the original high-res image.",[56,413,414,417,418,420],{},[41,415,416],{},"Color Bleeding:"," If colors from the original object are bleeding into the new one, increase your ",[87,419,200],{}," value to 12 or 15 to completely obliterate the original edge data.",[27,422],{},[30,424,426],{"id":425},"_5-structuring-for-distribution-the-json-export","5. Structuring for Distribution (The JSON Export)",[14,428,429],{},"When you have perfected an inpainting pipeline, you rarely want to build it again from scratch. You also might want to share this exact setup with others. ComfyUI's JSON workflow system is perfect for this, but it requires a bit of cleanup before export.",[14,431,432],{},"Before saving your workflow:",[53,434,435,445,461],{},[56,436,437,440,441,444],{},[41,438,439],{},"Clear User Data:"," Right-click your ",[87,442,443],{},"Load Image"," node and ensure there isn't a massive, highly personal 4K image baked into the node data. 
Replace it with a small, generic placeholder image.",[56,446,447,450,451,251,454,456,457,460],{},[41,448,449],{},"Primitive Nodes:"," If you are sharing this workflow, extract the ",[87,452,453],{},"Denoise",[87,455,200],{}," parameters into ",[87,458,459],{},"Primitive"," nodes. Route them to the top of your workspace. This creates a \"dashboard\" effect, so when someone downloads your JSON, they don't have to hunt through the spaghetti logic to find the controls that actually matter.",[56,462,463,466],{},[41,464,465],{},"Export:"," Click the \"Save\" button on the floating menu. The resulting JSON file contains the absolute blueprint of your pipeline, ready to be hosted, shared, or imported into another environment.",{"title":468,"searchDepth":469,"depth":469,"links":470},"",2,[471,472,480,483,484],{"id":32,"depth":469,"text":33},{"id":73,"depth":469,"text":74,"children":473},[474,476,477,478,479],{"id":81,"depth":475,"text":82},3,{"id":97,"depth":475,"text":98},{"id":144,"depth":475,"text":145},{"id":213,"depth":475,"text":214},{"id":285,"depth":475,"text":286},{"id":305,"depth":469,"text":306,"children":481},[482],{"id":324,"depth":475,"text":325},{"id":382,"depth":469,"text":383},{"id":425,"depth":469,"text":426},"md",{},true,"/en/basic-tutorial/inpainting-workflow",{"title":5,"description":16},"en/basic-tutorial/inpainting-workflow","4_Wo3rzxM0YAvzuAjs4wfAK_54uzOQE_mTcUX8x33do",1773986046872]