
AnimateDiff


Description

AnimateDiff will allow us to turn our static images and text prompts into animated videos by generating a sequence of images that transition smoothly.

We will use the ComfyUI-AnimateDiff-Evolved custom node, an improved version of the original ComfyUI-AnimateDiff.

AnimateDiff-Evolved gives us more control over the motion in our videos, and lets us create longer videos than we could with Stable Video Diffusion alone.

We will start learning about AnimateDiff by using SD1.5 models.

We will also use the ControlNet videos created in the previous lesson.

If you don't have those ControlNet videos, then you can download and extract the contents of this walking-controlnets.zip file into your ComfyUI/input folder.
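The extraction step can also be done from a terminal; a minimal sketch, assuming walking-controlnets.zip is in your current directory and ComfyUI is at ./ComfyUI (adjust both paths to your setup):

```shell
# Extract the ControlNet videos into ComfyUI's input folder.
# python3 -m zipfile is used so no separate unzip tool is needed.
# Does nothing if the zip isn't in the current directory.
if [ -f walking-controlnets.zip ]; then
  python3 -m zipfile -e walking-controlnets.zip ComfyUI/input/
fi
```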

Install ComfyUI-AnimateDiff-Evolved

Install the ComfyUI-AnimateDiff-Evolved custom node using the manager, or from your command/terminal prompt:

  1. Navigate to your ComfyUI/custom_nodes folder.
  2. Run:
    git clone https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved.git
    
  3. Restart ComfyUI

Start Workflow

Drag this workflow into ComfyUI.

Install the AnimateDiff Motion Model

We can download one or more of the original motion model checkpoints (mm_sd_v14, mm_sd_v15, mm_sd_v15_v2, v3_sd15_mm) from https://huggingface.co/guoyww/animatediff/tree/cd71ae134a27ec6008b968d6419952b0c0494cf2

However, ByteDance has released much faster versions under their AnimateDiff-Lightning project.

So instead, download the 1-step, 2-step, 4-step, and 8-step safetensors files from https://huggingface.co/ByteDance/AnimateDiff-Lightning/tree/main

Place the models that you've downloaded into your ComfyUI/models/animatediff_models folder.
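The four Lightning checkpoints can also be fetched from a terminal. A sketch, assuming the repo keeps its current layout; the `animatediff_lightning_<N>step_comfyui.safetensors` filenames are my reading of the repo listing, so double-check them on the Hugging Face page before running:

```shell
# Print one wget command per AnimateDiff-Lightning checkpoint.
# Remove `echo` (or pipe the output to `sh`) to actually download.
BASE="https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main"
DEST="ComfyUI/models/animatediff_models"
for steps in 1 2 4 8; do
  echo wget -c -P "$DEST" "$BASE/animatediff_lightning_${steps}step_comfyui.safetensors"
done
```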

Install the ComfyUI-Advanced-ControlNet

Install the ComfyUI-Advanced-ControlNet custom node using the manager, or from your command/terminal prompt:

  1. Navigate to your ComfyUI/custom_nodes folder.
  2. Run:
    git clone https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet.git
    
  3. Restart ComfyUI

SD1.5 IP-Adapter

We will use the IP-Adapter with SD1.5 models, so we should also ensure we have a compatible IP-Adapter model.

Download ip-adapter-plus_sd15.safetensors and save it to your ComfyUI/models/ipadapter folder.
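If you prefer the terminal, a sketch of that download; I'm assuming the file lives at its usual path in the h94/IP-Adapter Hugging Face repo, so confirm the URL if it 404s:

```shell
# Fetch ip-adapter-plus_sd15.safetensors into ComfyUI's ipadapter folder.
# The h94/IP-Adapter path is an assumption; verify it on Hugging Face.
URL="https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.safetensors"
DEST="ComfyUI/models/ipadapter"
mkdir -p "$DEST"
echo wget -c -P "$DEST" "$URL"   # drop `echo` to actually download
```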

Style Image

Positive Prompt

white background
long black hair in ponytail
floral tshirt
light pink neck braced jacket
yellow jeans

Install Optional Checkpoints

Place epiCrealism.safetensors (huggingface) into your ComfyUI/models/checkpoints folder.

Place 欧美动漫 ToonYou.safetensors (huggingface) into your ComfyUI/models/checkpoints folder.

Place cardosAnime_v20.safetensors (huggingface) into your ComfyUI/models/checkpoints folder.

The cardosAnime_v20 checkpoint will produce much better-defined results if you also use the vae-ft-mse-840000-ema-pruned.safetensors VAE.

Place vae-ft-mse-840000-ema-pruned.safetensors into your ComfyUI/models/vae folder.

Final Workflow

Original AnimateDiff

ComfyUI-AnimateDiff-Evolved (GitHub)

ComfyUI-VideoHelperSuite (GitHub)

AnimateDiff-Lightning (huggingface)

ComfyUI_IPAdapter_plus (GitHub)