
What are preprocessors?

Preprocessors are foundational tools that extract structural information from images. They convert images into conditioning signals like depth maps, lineart, pose skeletons, and surface normals. These outputs drive better control and consistency in ControlNet, image-to-image, and video workflows. Using preprocessors as separate workflows enables:
  • Faster iteration without full graph reruns
  • Clear separation of preprocessing and generation
  • Easier debugging and tuning
  • More predictable image and video results

Depth estimation

Depth estimation converts a flat image into a depth map representing relative distance within a scene. This structural signal is foundational for controlled generation, spatially aware edits, and relighting workflows. This workflow emphasizes:
  • Clean, stable depth extraction
  • Consistent normalization for downstream use
  • Easy integration with ControlNet and image-edit pipelines
Depth outputs can be reused across multiple passes, making it easier to iterate without re-running expensive upstream steps.
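
To see what this step amounts to outside the graph, the sketch below runs a MiDaS-based depth estimator with the controlnet_aux Python package and saves the result for reuse. The package, model repository, and file names are assumptions for illustration; the workflow above uses ComfyUI's own preprocessor nodes rather than this code.

```python
# Minimal standalone depth pass (assumes: controlnet_aux installed, a local input.png)
from PIL import Image
from controlnet_aux import MidasDetector

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open("input.png").convert("RGB")
depth_map = midas(image)  # PIL image: relative depth of the scene

# Save once, then feed the map to a depth ControlNet in later passes
# without re-running the estimator.
depth_map.save("depth.png")
```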

Depth Estimation Workflow

Run on Comfy Cloud

Lineart conversion

Lineart preprocessors distill an image down to its essential edges and contours, removing texture and color while preserving structure. This workflow is designed to:
  • Produce clean, high-contrast lineart
  • Minimize broken or noisy edges
  • Provide reliable structural guidance for stylization and redraw workflows
Lineart pairs especially well with depth and pose, offering strong structural constraints without overconstraining style.
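
For comparison, a minimal standalone sketch of the same step with the controlnet_aux package is shown below; the package choice and file names are illustrative assumptions, and the workflow itself runs ComfyUI preprocessor nodes.

```python
# Minimal standalone lineart pass (assumes: controlnet_aux installed, a local input.png)
from PIL import Image
from controlnet_aux import LineartDetector

lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open("input.png").convert("RGB")
lineart_map = lineart(image)  # high-contrast edges with texture and color removed
lineart_map.save("lineart.png")
```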

Lineart Conversion Workflow

Run on Comfy Cloud

Pose detection

Pose detection extracts body keypoints and skeletal structure from images, enabling precise control over human posture and movement. This workflow focuses on:
  • Clear, readable pose outputs
  • Stable keypoint detection suitable for reuse across frames
  • Compatibility with pose-based ControlNet and animation pipelines
By isolating pose extraction into a dedicated workflow, pose data becomes easier to inspect, refine, and reuse.
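
A comparable standalone sketch with the controlnet_aux package is below; the OpenPose model choice and file names are assumptions, and the template itself uses ComfyUI's pose preprocessor nodes.

```python
# Minimal standalone pose pass (assumes: controlnet_aux installed, a local input.png)
from PIL import Image
from controlnet_aux import OpenposeDetector

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open("input.png").convert("RGB")
# hand_and_face=True also draws hand and face keypoints on the skeleton render.
pose_map = openpose(image, hand_and_face=True)
pose_map.save("pose.png")
```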

Pose Detection Workflow

Run on Comfy Cloud

Normals extraction

Normals estimation converts a flat image into a surface normal map—a per-pixel direction field that describes how each part of a surface is oriented (typically encoded as RGB). This signal is useful for relighting, material-aware stylization, and highly structured edits. This workflow emphasizes:
  • Clean, stable normal extraction with minimal speckling
  • Consistent orientation and normalization for reliable downstream use
  • ControlNet-ready outputs for relighting, refinement, and structure-preserving edits
  • Reuse across passes so you can iterate without re-running earlier steps
Normal outputs can be used to:
  • Drive relight/shading changes while preserving geometry
  • Add a stronger 3D-like structure to stylization and redraw pipelines
  • Improve consistency across frames when paired with pose/depth for animation work
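
As with the other preprocessors, a minimal standalone sketch using the controlnet_aux package follows; the BAE normals model and file names are assumptions, and the workflow above uses ComfyUI's own nodes.

```python
# Minimal standalone normals pass (assumes: controlnet_aux installed, a local input.png)
from PIL import Image
from controlnet_aux import NormalBaeDetector

normal_bae = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open("input.png").convert("RGB")
normal_map = normal_bae(image)  # RGB-encoded per-pixel surface orientation
normal_map.save("normals.png")
```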

Normals Extraction Workflow

Run on Comfy Cloud

Getting started

1. Update ComfyUI: update ComfyUI to the latest version, or use Comfy Cloud.
2. Load the workflow: download the workflows linked above, or find them in Templates on Comfy Cloud.
3. Install dependencies: follow the pop-up dialogs to download the required models and custom nodes.
4. Run the workflow: review inputs, adjust settings, and run the workflow.