
Basic ComfyUI Workflow

Video Lecture

Section Video Links
Basic Workflow
Video Timings

00:00 Begin basic Stable Diffusion 1.5 workflow by clearing the workspace.
01:30 Download, install, and load the SD 1.5 Checkpoint model.
03:00 Review checkpoint components and low VRAM data types.
05:00 Connect model to K Sampler, VAE Decode, and Preview Image.
06:30 Add CLIP Text Encode nodes for positive and negative prompts.
08:00 Resolve latent image input error using Empty Latent Image.
10:30 Use Save Image node for file output and sharing workflows.
13:00 Implement LCM LoRA to speed up generation for low VRAM.

Description

We will set up a very basic image-generation workflow to familiarize ourselves with the UI and with the process of installing a model checkpoint.

The workflow will consist of the following nodes:

  • Load Checkpoint : Loads a checkpoint model (e.g., SD 1.5).
  • KSampler : The denoising engine. Uses the prompt, noise, and model to iteratively generate an image in latent space.
  • VAE Decode : Variational Autoencoder. Converts the latent image into a visible RGB image.
  • Save Image : Saves the final generated image to disk.
  • CLIP Text Encode (Positive Prompt) : Encodes your main text prompt into a format the model can use.
  • CLIP Text Encode (Negative Prompt) : Encodes undesired elements (e.g., "blurry, distorted") to help the model avoid them.
  • Empty Latent Image : Creates an initial noise image (latent space) of the desired resolution.
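The node list above can be sketched in ComfyUI's API (JSON) workflow format, where each connection is a `[node_id, output_index]` pair. The node IDs, prompt text, and sampler settings below are illustrative, not the values the UI will assign:

```python
# Seven-node graph in ComfyUI API format. Connections are [node_id, output_index]:
# Load Checkpoint outputs are MODEL (0), CLIP (1), VAE (2).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, distorted", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

print(len(workflow))  # 7
```

This is the same graph you will wire up by hand in the UI; seeing it as data makes the error in the "latent image input" step obvious: KSampler cannot run until its latent_image input is connected to something.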

Downloading and Installing Your First AI Model

There are multiple ways to download missing models: you can copy the model name from the error message and search for it online (the search usually leads to Hugging Face), or use the "browse templates" feature within ComfyUI and click the download button when prompted. Models are typically large: the first one, Stable Diffusion v1.5, is nearly 2 GB, and many models are larger still.

https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors?download=true

Once downloaded, you should move the model file into the ComfyUI/models/checkpoints/ folder within your ComfyUI installation directory.

📂 ComfyUI/
├── 📂 models/
│   ├── 📂 checkpoints/
│   │   └── v1-5-pruned-emaonly-fp16.safetensors

After moving the model file into place, you may need to refresh the node definitions.
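The move step can also be scripted. A minimal sketch, assuming the file landed in your Downloads folder; the function name and paths are illustrative, not part of ComfyUI:

```python
import shutil
from pathlib import Path

def install_checkpoint(downloaded: Path, comfy_root: Path) -> Path:
    """Move a downloaded checkpoint into <comfy_root>/models/checkpoints/."""
    dest_dir = comfy_root / "models" / "checkpoints"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder tree if missing
    dest = dest_dir / downloaded.name
    shutil.move(str(downloaded), str(dest))
    return dest

# e.g. install_checkpoint(
#     Path.home() / "Downloads" / "v1-5-pruned-emaonly-fp16.safetensors",
#     Path("ComfyUI"))
```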

Tip

Pinning the checkpoints folder to Quick Access in Windows is recommended for easier future navigation.

Generating Your First Image

With the model installed and the UI refreshed, you can then press "run" to execute the workflow. A green border will appear around different nodes in the workflow, indicating the current stage of the process. This will lead to the successful generation of your first image using ComfyUI.

About v1-5-pruned-emaonly-fp16.safetensors

The v1-5-pruned-emaonly-fp16.safetensors model is an optimised version of the Stable Diffusion v1.5 model.

pruned means that this version of the model has had unnecessary parameters removed. This reduces its size and computational cost.

emaonly means the checkpoint keeps only the Exponential Moving Average (EMA) copy of the weights. EMA weights are an averaged version of the training weights that often generalize better; dropping the non-EMA weights (needed only to resume training) further reduces file size.

The .safetensors extension refers to a model serialization format that is faster and safer than earlier methods. The earlier .ckpt format uses Python's pickle serialization, which can embed arbitrary executable code and is therefore a potential security risk.
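The safety difference is visible in the format itself: a .safetensors file is just an 8-byte little-endian length prefix, a JSON header describing each tensor, and raw tensor bytes, so loading one never executes code. A minimal sketch of a header parser (the tiny example "file" is hand-built, not a real model):

```python
import json
import struct

def read_safetensors_header(data: bytes) -> dict:
    """Parse the JSON header of a .safetensors byte stream.

    Layout: 8-byte little-endian header length, then that many bytes of
    JSON mapping tensor names to dtype/shape/offsets -- no executable code.
    """
    (header_len,) = struct.unpack("<Q", data[:8])
    return json.loads(data[8:8 + header_len])

# Hand-built toy example: one FP16 tensor named "w" with 4 bytes of data.
header = json.dumps({"w": {"dtype": "F16", "shape": [2],
                           "data_offsets": [0, 4]}}).encode()
blob = struct.pack("<Q", len(header)) + header + b"\x00\x00\x00\x00"
print(read_safetensors_header(blob)["w"]["shape"])  # [2]
```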

This model is also a bundle of three components.

Component                        Description
UNet (Diffusion Model)           The core neural network that denoises latent noise into an image over multiple steps.
Text Encoder (CLIP text model)   Converts your text prompt into a numerical embedding (vector).
VAE (Variational Autoencoder)    Encodes/decodes between the latent space (small tensor representation) and pixel space (the actual image).

Low VRAM GPUs

For low-VRAM GPUs, such as those with 4-6 GB, we can install an LCM LoRA that speeds up the generation process at the expense of some detail.

For an SD 1.5-based checkpoint, download pytorch_lora_weights.safetensors, rename it to LCM_LoRA_SD15.safetensors, and save it into your ComfyUI/models/loras/ folder.
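In the workflow, the LoRA loader sits between Load Checkpoint and the KSampler. In API-format terms the addition looks roughly like the sketch below; node IDs and strengths are illustrative, and the steps/cfg values are typical LCM settings rather than requirements:

```python
# Extra node: LoraLoader takes MODEL and CLIP from Load Checkpoint (node "1")
# and feeds its patched MODEL/CLIP outputs to KSampler and the text encoders.
lora_loader = {
    "class_type": "LoraLoader",
    "inputs": {
        "lora_name": "LCM_LoRA_SD15.safetensors",  # the renamed file from above
        "strength_model": 1.0,
        "strength_clip": 1.0,
        "model": ["1", 0],   # MODEL output of Load Checkpoint
        "clip": ["1", 1],    # CLIP output of Load Checkpoint
    },
}

# KSampler settings usually changed alongside an LCM LoRA:
lcm_sampler_settings = {
    "steps": 6,              # LCM converges in roughly 4-8 steps
    "cfg": 1.5,              # keep CFG low (about 1-2) with LCM
    "sampler_name": "lcm",
}
```

Fewer steps is where the speedup comes from: the LoRA distills the model so that a handful of denoising iterations is enough.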

Download SD1.5 Model using WGET

If you are using Runpod, or a similar hosted GPU service, then you can access your running pod/instance using a terminal.

Using the terminal connected to your pod/instance, cd into the folder ComfyUI/models/checkpoints/ and run the command:

wget https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors

Wait for it to fully download, and then try to execute your workflow again.
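If you prefer Python to wget (or wget is not installed on the instance), the same download can be scripted with the standard library. The folder path and helper names below are illustrative:

```python
import urllib.request
from pathlib import Path

MODEL_URL = ("https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive"
             "/resolve/main/v1-5-pruned-emaonly-fp16.safetensors")

def filename_from_url(url: str) -> str:
    """Derive the on-disk filename from the URL (query string stripped)."""
    return url.split("?", 1)[0].rsplit("/", 1)[-1]

def download(url: str, dest_dir: str = "ComfyUI/models/checkpoints") -> Path:
    """Stream the model straight into the checkpoints folder."""
    dest = Path(dest_dir) / filename_from_url(url)
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as f:
        while chunk := resp.read(1 << 20):  # 1 MiB chunks, so ~2 GB never sits in RAM
            f.write(chunk)
    return dest

# download(MODEL_URL)  # network call -- uncomment to actually fetch ~2 GB
```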

Troubleshooting

Prompt execution failed
Value not in list: ckpt_name: 'v1-5-pruned-emaonly-fp16.safetensors' not in []

Download v1-5-pruned-emaonly-fp16.safetensors from https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors?download=true, save it into your ComfyUI/models/checkpoints/ folder, refresh the node definitions, and try again.
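The empty list ([]) in the error is the clue: ComfyUI scanned models/checkpoints/ and found no files at all. A small diagnostic sketch, using the folder layout described above (the helper function itself is illustrative, not part of ComfyUI):

```python
from pathlib import Path

def available_checkpoints(comfy_root: str = "ComfyUI") -> list[str]:
    """List the model files ComfyUI should offer in the ckpt_name dropdown.

    An empty result matches the "not in []" part of the error message.
    """
    folder = Path(comfy_root) / "models" / "checkpoints"
    if not folder.is_dir():
        return []
    return sorted(p.name for p in folder.glob("*.safetensors"))
```

If this returns an empty list, either the file is in the wrong folder or the ComfyUI root you are checking is not the one the server is running from.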

Downloads

  • v1-5-pruned-emaonly-fp16.safetensors
  • v1-5-pruned-emaonly.safetensors (FP32 version)