
Inpainting

Video Lecture

Video Timings

00:00 Introduction to inpainting and setting up the workflow
00:25 Creating and editing image masks in ComfyUI
01:30 Initial inpainting using Stable Diffusion 1.5
02:10 Downloading and installing the dedicated inpainting model
03:00 Improved inpainting results; understanding 'grow mask'
04:45 Applying inpainting for old photograph restoration
06:20 Iterative mask adjustments for refined image restoration

Description

Inpainting in AI image generation refers to the process of filling in or modifying specific parts of an image using a generative model.

You provide a base image with a mask that indicates the area to change, and a prompt describing the desired content to replace it with.

Inpainting gives you more precise control when removing objects, changing details, or even restoring damaged images.

You can use any model to generate the replacement content, but if you want the new content to blend in seamlessly, matching the surrounding style and context, a dedicated inpainting model works best.
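To make the mask-plus-prompt idea concrete outside of a node graph, here is a minimal sketch using Hugging Face's diffusers library. This is an illustration only; the lesson itself builds the workflow in ComfyUI, and the file names, prompt, and model repo id here are assumptions you would swap for your own.

```python
# Illustrative inpainting sketch with diffusers (the lesson uses ComfyUI instead).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed Hugging Face repo hosting the SD 2.0 inpainting weights.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder files: the base image and a mask where white marks the region to repaint.
base = Image.open("car.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a weathered signpost beside a dusty road",  # placeholder prompt
    image=base,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```

The key point mirrors the ComfyUI workflow: the model only regenerates the masked pixels, guided by the prompt, while the rest of the image is kept as-is.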

Download the 512-inpainting-ema.safetensors (huggingface) checkpoint model and save it into your ComfyUI checkpoints folder.
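If you prefer to fetch the checkpoint from a script rather than the browser, something like the following works, assuming the huggingface_hub package is installed. The repo id and the ComfyUI folder path are assumptions, so adjust them to match your install.

```python
from huggingface_hub import hf_hub_download

# Assumed repo id and ComfyUI checkpoints location; adjust both to your setup.
hf_hub_download(
    repo_id="stabilityai/stable-diffusion-2-inpainting",
    filename="512-inpainting-ema.safetensors",
    local_dir="ComfyUI/models/checkpoints",
)
```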

You can use one of your own images for this lesson. Make sure it is 512x512, since the 512-inpainting-ema.safetensors model is optimised for that resolution, or you can copy/paste the images below into ComfyUI when needed in the video.
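If your own image is not already 512x512, a quick way to prepare it is with Pillow. This is a small sketch with placeholder file names: it center-crops to a square and then resizes.

```python
from PIL import Image

# Center-crop an arbitrary image to a square, then resize to 512x512.
img = Image.open("my_photo.jpg").convert("RGB")
side = min(img.size)
left = (img.width - side) // 2
top = (img.height - side) // 2
img = img.crop((left, top, left + side, top + side)).resize((512, 512))
img.save("my_photo_512.png")
```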

car on dusty road

A photo that needs restoration
