Text to Image

Video Lecture

Description

We will experiment with a text to image (T2I) workflow and learn about the KSampler.

We will use Stable Diffusion 1.5 since it is fast and will work on most GPUs.

It works best with short, clear prompts and simple concepts, and it has a natural, realistic visual style.

Start Workflow

Using a compatible browser, you can drag the image below into ComfyUI to load and run the exact workflow that generated it (ComfyUI embeds the workflow in the PNG's metadata).

ComfyUI_00001_.png

KSampler

  • Seed: sets the random starting noise; same seed = same output.
  • Control After Generate: determines how the seed changes after each run (fixed, increment, decrement, or randomize).
  • Steps: number of denoising iterations; more steps = higher quality but slower.
  • CFG: guidance strength; higher = closer to prompt, lower = more creative.
  • Sampler: numerical method controlling how the model moves through noise levels during denoising.
  • Scheduler: generates the sequence of sigma (noise scale) values for each step.
  • Denoise: how much of the latent is denoised (1.0 = full, <1.0 = partial/refinement).
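
To see how these parameters fit together, here is a minimal sketch of a KSampler-style denoising loop. It is not ComfyUI's actual implementation: `toy_denoiser` is a stand-in for the real U-Net, the linear sigma schedule and Euler update are simplifying assumptions, but the roles of seed, steps, CFG, and denoise match the descriptions above.

```python
import numpy as np

def toy_denoiser(x, sigma, cond):
    # Stand-in for the diffusion model (hypothetical): a real U-Net predicts
    # the clean latent from the noisy latent x and the text conditioning.
    return cond + 0.3 * sigma * x

def ksampler(cond, uncond, shape, seed=0, steps=20, cfg=7.0, denoise=1.0):
    rng = np.random.default_rng(seed)           # Seed: same seed -> same noise
    sigmas = np.linspace(1.0, 0.0, steps + 1)   # Scheduler: simple linear schedule
    start = int(steps * (1.0 - denoise))        # Denoise < 1.0 runs fewer steps
    # (a real img2img workflow would add this noise to an input latent instead)
    x = rng.standard_normal(shape) * max(sigmas[start], 1e-8)
    for i in range(start, steps):
        d_cond = toy_denoiser(x, sigmas[i], cond)
        d_uncond = toy_denoiser(x, sigmas[i], uncond)
        # CFG: push the prediction away from the unconditional result,
        # toward the prompt-conditioned one, by a factor of cfg.
        denoised = d_uncond + cfg * (d_cond - d_uncond)
        # Euler sampler step: move the latent toward the denoised prediction.
        d = (x - denoised) / max(sigmas[i], 1e-8)
        x = x + d * (sigmas[i + 1] - sigmas[i])
    return x
```

With more steps the loop takes smaller moves through the noise levels (higher quality, slower); a higher `cfg` amplifies the difference between the conditioned and unconditioned predictions.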

Example Prompts

  • a breathtaking alpine valley at sunrise
  • a car on a dusty road
  • a cat on a skateboard
  • a bicycle in amsterdam
  • speeding through a city with bright lights. strobe effect
  • a person reading a newspaper
  • a portrait of a person, in the style of picasso
  • modern architectural buildings with clean lines, beautiful gardens with water features, situated on the edge of a cliff, overlooking the fjords

Scheduler Graphs

The scheduler controls how the "sigmas" (noise levels, or variance schedule) are distributed across the denoising steps.

Each scheduler generates a sequence of noise scales (sigmas) for the N steps; these values determine how much weight the sampler gives to the model's prediction versus the current latent at each iteration.
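
The difference between schedulers can be made concrete by computing two schedules side by side. This sketch compares a simple linear-in-sigma schedule (an assumption standing in for the "normal" scheduler) with the Karras schedule, whose `rho` exponent concentrates steps at low noise levels; the `sigma_min`/`sigma_max` values used in the test are typical for SD 1.5 but are illustrative, not pulled from ComfyUI.

```python
import numpy as np

def linear_sigmas(sigma_min, sigma_max, steps):
    # Evenly spaced noise levels from sigma_max down to sigma_min,
    # with a final 0.0 so the last step fully denoises the latent.
    return np.append(np.linspace(sigma_max, sigma_min, steps), 0.0)

def karras_sigmas(sigma_min, sigma_max, steps, rho=7.0):
    # Karras-style schedule: interpolate in sigma^(1/rho) space, then
    # raise back to the rho-th power, packing steps near low sigmas.
    ramp = np.linspace(0.0, 1.0, steps)
    min_r, max_r = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    sigmas = (max_r + ramp * (min_r - max_r)) ** rho
    return np.append(sigmas, 0.0)
```

Plotting both sequences reproduces the shapes shown in the scheduler graphs: the linear schedule falls at a constant rate, while the Karras curve drops quickly at first and then spends most of its steps refining fine detail at low noise.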