CGI/PGI | Illustration | Restricted | #Manga #Anime | by venezArt © 04.25

Mastering Line Art Coloring with ComfyUI: A Step-by-Step Guide

Coloring line art can be a challenging but rewarding process, especially with AI-powered tools like ComfyUI. This open-source, node-based workflow tool offers a flexible way to generate stunning digital art with artificial intelligence. In this tutorial, we'll walk through the process of coloring line art in ComfyUI, from setup to final output. This tutorial is not aimed at complete ComfyUI beginners, since we will be building a custom workflow to handle generation and coloring. If you are a beginner, don't be afraid to follow along; we will progress through the process step by step.

Step 1: Setting Up ComfyUI

Before diving into the coloring process, ensure you have ComfyUI installed and running on your system. If you haven’t installed it yet, follow these steps:

  1. Download ComfyUI from the official GitHub repository.
  2. Install the required dependencies (Python, PyTorch, and at least one Stable Diffusion model checkpoint).
  3. Launch the ComfyUI interface in your browser or through a local environment.

Step 2: Preparing Your Line Art

To get the best results, your line art should be:

  • Clean and high resolution (preferably 1024×1024 or higher).
  • Saved in PNG format with a transparent or white background.
  • Well-defined, with closed gaps to help the AI recognize the shapes properly.
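The preparation checklist above can be sketched as a small Pillow script. This is a minimal sketch, not part of the tutorial's workflow: the function assumes Pillow is installed, and the synthetic demo image stands in for your actual line-art file.

```python
# Sketch: normalize a line-art scan to match the checklist above, assuming
# Pillow is installed. Flattens transparency onto white and upscales
# anything whose shorter side is below 1024 px.
from PIL import Image

def prepare_line_art(img: Image.Image, min_side: int = 1024) -> Image.Image:
    # Flatten a transparent background onto white so the lines stay well-defined.
    if img.mode in ("RGBA", "LA", "P"):
        img = img.convert("RGBA")
        white = Image.new("RGBA", img.size, (255, 255, 255, 255))
        img = Image.alpha_composite(white, img)
    img = img.convert("RGB")
    # Upscale small scans so the shorter side reaches min_side.
    if min(img.size) < min_side:
        scale = min_side / min(img.size)
        img = img.resize((round(img.width * scale), round(img.height * scale)),
                         Image.LANCZOS)
    return img

# Demo on a synthetic 512x768 transparent canvas (stands in for your scan):
demo = Image.new("RGBA", (512, 768), (0, 0, 0, 0))
out = prepare_line_art(demo)
print(out.mode, out.size)
# out.save("lineart_clean.png")  # PNG, as recommended above
```

Save the result as PNG before loading it into ComfyUI.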

Step 3: Loading Your Line Art into ComfyUI

  1. First, we have to set up a basic node graph so we can choose our model, enter our prompts, and generate an image. Open ComfyUI and load your favorite Stable Diffusion model using the Load Checkpoint node. For a list of Stable Diffusion models, go here.
  2. From the CLIP Socket, create two Prompt Input nodes: one for our main prompt and one for our negative prompt.
  3. From the Model Socket, create a new Sampler node, then connect the two Prompt Input nodes to its Positive Socket and Negative Socket. The Sampler node gives us control over the details of the image being generated.
  4. From the Sampler node, create a new VAE Decode node, and connect the VAE Socket on the Load Checkpoint node to the VAE socket on the VAE Decode node.
  5. Finally, from the VAE Decode node's Image Socket, create a Preview Image node, which will display our generated image.

Your ComfyUI should look similar to the image above.
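If it helps to see the wiring spelled out, the five steps above correspond to a graph like the following, written the way ComfyUI's "Save (API Format)" exports it. The node class names are ComfyUI's internal identifiers for the nodes named above; the checkpoint filename, prompts, and the Empty Latent Image node (which the sampler needs as its starting canvas) are assumptions, not from the original post.

```python
# Sketch of the basic text-to-image graph in ComfyUI API format.
# Each input wire is [source_node_id, source_output_index].
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",      # Load Checkpoint
          "inputs": {"ckpt_name": "yourModel.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",              # positive prompt
          "inputs": {"clip": ["1", 1], "text": "a colorful anime character"}},
    "3": {"class_type": "CLIPTextEncode",              # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",            # assumed starting latent
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",                    # Sampler
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "PreviewImage",
          "inputs": {"images": ["6", 0]}},
}
print(len(graph), "nodes")
```

Note how the VAE wire (`["1", 2]`) comes from the Load Checkpoint node, exactly as in step 4 above.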

  1. Next, let's create a node to load our line art and a node to color it. Start by creating an IPAdapter Advanced node (our coloring node). Reroute the Model Socket coming out of the Load Checkpoint node into the IPAdapter node, and from the IPAdapter node's Model Socket, reconnect to the Sampler node.
  2. From the IPAdapter node, create an IPAdapter Loader node and a Load CLIP Vision node. Choose your preferred IPAdapter model and CLIP Vision model, or leave them at the defaults. Then create a Load Image node from the Image Socket. This will hold the reference image we will use to color our line art.
  3. Now let's create another Load Image node to load the line image we want to color. I will be using the image below.
  4. From that Load Image node, create an Invert Image node so we can pass the result through a Lineart ControlNet model node. This gives us control over the line art; connect the ControlNet node's Positive and Negative Sockets to the Positive and Negative Sockets on the Sampler node.

Your ComfyUI should look similar to the image above.
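In the same API-graph notation, the coloring additions look roughly like this. Heavy hedging applies here: the IPAdapter node classes come from the ComfyUI_IPAdapter_plus custom-node pack (not core ComfyUI), all filenames are placeholders, and the node IDs assume the checkpoint loader is node "1" and the two prompt nodes are "2" and "3", as in the basic graph built earlier.

```python
# Sketch of the IPAdapter + ControlNet additions (node names for the
# IPAdapter pieces are from the ComfyUI_IPAdapter_plus custom-node pack;
# filenames and node IDs are placeholder assumptions).
additions = {
    "8":  {"class_type": "IPAdapterModelLoader",       # IPAdapter Loader
           "inputs": {"ipadapter_file": "ip-adapter_sd15.safetensors"}},
    "9":  {"class_type": "CLIPVisionLoader",           # Load CLIP Vision
           "inputs": {"clip_name": "clip_vision.safetensors"}},
    "10": {"class_type": "LoadImage",                  # color reference image
           "inputs": {"image": "reference.png"}},
    "11": {"class_type": "IPAdapterAdvanced",          # the "coloring" node
           "inputs": {"model": ["1", 0], "ipadapter": ["8", 0],
                      "clip_vision": ["9", 0], "image": ["10", 0],
                      "weight": 1.0, "start_at": 0.0, "end_at": 1.0}},
    "12": {"class_type": "LoadImage",                  # the line art to color
           "inputs": {"image": "lineart.png"}},
    "13": {"class_type": "ImageInvert",                # Invert Image
           "inputs": {"image": ["12", 0]}},
    "14": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_lineart.pth"}},
    "15": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["14", 0], "image": ["13", 0],
                      "strength": 1.0, "start_percent": 0.0,
                      "end_percent": 1.0}},
}
# Rerouted Sampler inputs: model now comes from the IPAdapter node, and the
# conditioning comes from the ControlNet node instead of the prompts directly.
sampler_inputs = {"model": ["11", 0],
                  "positive": ["15", 0], "negative": ["15", 1]}
```

The rerouting at the end is the key move: the prompts now flow through the ControlNet node before reaching the sampler.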

Step 4: Applying Color with AI

  1. Select a reference image whose colors you would like to apply to your line art.
  2. In the Positive Prompt node, describe your line artwork and reference the details and colors from the reference image, for more detail and accuracy.
  3. Use the Negative Prompt node to exclude elements you do not want taken from the reference image, or anything else you want to avoid during generation.
  4. Hit Run and wait for your image to generate. Depending on the results, you will have to adjust the following: reduce the Sampler's Denoise value until you reach the desired outcome, and lower the ControlNet node's strength to around 0.60 (along with its end percent). Re-run; it may take a few attempts to get the result you want.

Colored image from references
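The adjust-and-re-run cycle in step 4 can also be driven programmatically. This is an optional sketch, not part of the tutorial's workflow: the node IDs, the default ComfyUI port (8188), and its `/prompt` queueing endpoint are assumptions about a standard local install.

```python
# Sketch: apply the step-4 tuning values to a graph, then (optionally)
# queue it on a local ComfyUI server. Node IDs are assumptions.
import json
import urllib.request

def apply_tuning(graph, denoise=0.85, cn_strength=0.60, cn_end=0.60,
                 sampler_id="5", controlnet_id="15"):
    # Lower the sampler denoise and ControlNet strength/end percent,
    # as recommended above, before re-running.
    graph[sampler_id]["inputs"]["denoise"] = denoise
    graph[controlnet_id]["inputs"]["strength"] = cn_strength
    graph[controlnet_id]["inputs"]["end_percent"] = cn_end
    return graph

def queue_prompt(graph, host="127.0.0.1", port=8188):
    # Equivalent of hitting Run: POST the graph to a running ComfyUI server.
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

# Demo with a stub graph (no server needed for the tuning step itself):
demo = {"5": {"inputs": {"denoise": 1.0}},
        "15": {"inputs": {"strength": 1.0, "end_percent": 1.0}}}
tuned = apply_tuning(demo)
print(tuned["5"]["inputs"]["denoise"], tuned["15"]["inputs"]["strength"])
```

Scripting the loop this way makes it easy to sweep denoise values instead of re-running by hand.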

Step 5: Refining and Exporting Your Artwork

  • If the colors aren’t perfect, tweak the prompt and rerun the generation.
  • Use an image-to-image node to refine specific areas.
  • Use an upscale node to increase size and detail.
  • Add other nodes, such as Sharpen and Save Image, to streamline the workflow and save results straight to your drive for quick access.
  • Once satisfied, save your final colored artwork in PNG or JPEG format.

Final Thoughts

With ComfyUI, coloring line art becomes an intuitive and experimental process. The node-based workflow allows for endless customization, ensuring that artists can achieve their desired results with precision. Whether you’re a beginner or a seasoned digital artist, ComfyUI provides a powerful AI-assisted tool to enhance your creative workflow.

Try experimenting with different prompts and model settings to see what works best for your unique style. Enjoy!

We'd love to hear your thoughts! Drop a comment below and join the conversation!