Announcing ComfyUI-easygoing-nodes! Improved SDXL Prompt Parsing Prototype and Color Correction

A young woman with short black hair and blue eyes stands in a neon-lit bar, wearing a white shirt and black blazer, surrounded by various objects and a square neon sign.
  • Prototype improvements for CLIP-G processing
  • CPU processing for HiDream’s text encoder
  • LAB color space correction node

Introduction

Hello, I'm Easygoing!

This time, I'm excited to introduce ComfyUI-easygoing-nodes, a custom node package for ComfyUI. Here's an overview of what it offers.

Core Feature: Improved SDXL Prompt Parsing Prototype

ComfyUI-easygoing-nodes includes three main functionalities:

  • Improved prototype for SDXL prompt parsing
  • Processing HiDream's text encoder on the CPU
  • Color correction and other custom nodes

Let’s dive into each one.

Illustration of an anime-style female character standing in a bar at night, with a neon sign reading "BAR COMFY" and shelves of bottles in the background.

CLIP-L and CLIP-G

SDXL incorporates two types of text encoders:

  • CLIP-L (lightweight, excels at short text)
  • CLIP-G (high-performance, handles longer text)

CLIP-L and CLIP-G process prompts independently. For example, with the input prompt:

Input Prompt

cat, cute, sleeping  

CLIP-L Parsing (Representation)

cat, cute, sleeping<|endoftext|>

CLIP-G Parsing (Representation)

cat, cute, sleeping<|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|><|padding|>  

CLIP-L is designed for short prompts and marks the end of the prompt with an explicit <|endoftext|> token. In contrast, CLIP-G, built for longer prompts, fills the unused slots with padding tokens instead of an end tag.

This is correct behavior under the CLIP-G and SDXL specifications, so it is not a bug.

However, for short prompts (especially the tag-style negative prompts common with anime models), the abundance of <|padding|> tokens can dilute attention away from the meaningful words.
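The difference between the two parsing styles can be sketched in a few lines. This is an illustrative simplification only, not the real CLIP tokenizer (which works on token IDs over a 77-slot window); the token strings and the window length here are chosen for readability.

```python
# Illustrative sketch (NOT the real CLIP tokenizer): how the two encoders
# fill their token window for a short prompt.

def clip_l_tokens(words):
    # CLIP-L style: the prompt ends with a single end-of-text marker
    return words + ["<|endoftext|>"]

def clip_g_tokens(words, window=8):
    # CLIP-G style (SDXL): no end tag; every unused slot becomes a padding token
    return words + ["<|padding|>"] * (window - len(words))

prompt = ["cat", "cute", "sleeping"]
clip_l = clip_l_tokens(prompt)
clip_g = clip_g_tokens(prompt)
```

With a real 77-slot window, the 3-word prompt above would leave some 70 slots as <|padding|>, which is the imbalance the prototype addresses.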

Simplifying CLIP-G Parsing

To address this, I applied CLIP-L’s straightforward parsing approach to CLIP-G. Specifically, I replaced the ComfyUI/comfy/sdxl_clip.py file with a modified version provided by shiba*2.

Since Stable Diffusion 3.5 and HiDream also rely on this sdxl_clip.py file for CLIP-G, this change affects those models as well.

Comparing Illustrations

Let’s see the difference in action!

Original sdxl_clip.py

Illustration of an anime-style female character standing at a port at night, with neon-lit cityscape and water reflections in the background.

Prototype sdxl_clip.py

Illustration of an anime-style female character standing at a port at night, generated with the prototype sdxl_clip.py, featuring a neon-lit cityscape and water reflections.

Image Similarity: 99–99.5%

Comparison of the two images, showing Original vs. Prototype, with Color Difference and Grayscale Difference images, displaying MAE: 2.66 (99.0%) and SSIM: 1.00 (99.5%).

Image Comparison Tool
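For readers curious how an MAE of 2.66 maps to "99.0%": on a 0-255 pixel scale, the error can be read as a fraction of the full range. The exact formula the comparison tool uses is an assumption; this sketch just shows the arithmetic that reproduces the reported figure.

```python
# Hedged sketch: pixel-wise mean absolute error (MAE) and a simple
# similarity percentage on a 0-255 scale. The actual tool's formula
# may differ.

def mae(img_a, img_b):
    """Mean absolute error between two flat lists of 0-255 pixel values."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def similarity_percent(error, scale=255):
    """Read an MAE as a percentage of the full value range."""
    return 100.0 * (1 - error / scale)
```

Plugging in the reported MAE of 2.66 gives roughly 99.0% similarity, matching the figure above.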

There are subtle differences between the images, and it’s hard to say which is better.

If you prefer the look of the prototype, it’s worth giving it a try.

The prototype sdxl_clip.py is automatically applied when loading ComfyUI-easygoing-nodes, so once the custom node is installed, no further setup is needed.

Processing HiDream’s Text Encoder on CPU

The second feature is an implementation that processes HiDream’s text encoder on the CPU.

Illustration of an anime-style female character standing in a bar at night, with a neon sign reading "Bar Comfy" and shelves of bottles.

Recent updates to ComfyUI caused an issue where specifying device: cpu for HiDream’s text encoder wasn’t properly recognized, resulting in all processing defaulting to the GPU.

To fix this, I modified ComfyUI/comfy/text_encoders/hidream.py to ensure all HiDream text encoder processing occurs on the CPU.

Depending on your setup, running HiDream’s text encoder on the CPU can approximately double processing speed.

Like sdxl_clip.py, this modified hidream.py is automatically applied when loading ComfyUI-easygoing-nodes.

Additional Custom Nodes

Now, let’s explore the custom nodes added in ComfyUI-easygoing-nodes.

Color Correction in LAB Color Space

HDR Effects with LAB Adjust

UI of the HDR Effects with LAB Adjust node under the easygoing-nodes category, with parameters like hdr_intensity and shadow_intensity.

The HDR Effects with LAB Adjust node is a custom node that performs color correction in the LAB color space.

While PNG images in image generation are typically processed in the RGB color space, this node converts them to LAB color space for brightness and color adjustments.

RGB Color Space

R (Red) channel: 0 (black) – 255 (red)
G (Green) channel: 0 (black) – 255 (green)
B (Blue) channel: 0 (black) – 255 (blue)

In RGB color space, colors are expressed by combining Red, Green, and Blue.

Brightness is determined by a weighted sum of the RGB values, meaning that changing a color also changes brightness, and changing brightness also shifts color.
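This coupling can be seen with the standard BT.709 luminance weights (gamma is ignored here for simplicity, so this is only an approximation of perceived brightness):

```python
# Approximate brightness of an sRGB color using ITU-R BT.709 weights.
# Gamma correction is skipped to keep the illustration minimal.

def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Changing only the G channel changes both the hue AND the brightness:
pure_red = luminance(255, 0, 0)    # dimmer
pure_green = luminance(0, 255, 0)  # much brighter, despite equal channel value
```

Because every channel contributes to brightness, a pure color shift in RGB inevitably drags brightness along with it.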

LAB Color Space

L (Lightness) channel: 0 (black) – 100 (white)
A (green ↔ red) channel: −128 (green), 0 (neutral), 127 (red)
B (blue ↔ yellow) channel: −128 (blue), 0 (neutral), 127 (yellow)

In contrast, LAB color space separates lightness and color into distinct channels, allowing independent adjustments without mutual interference.
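For reference, here is a minimal single-pixel sRGB → LAB conversion using the standard CIELAB math (D65 white point). This is textbook color science, not the node's actual code; it just shows how lightness (L) ends up in its own channel, separate from the a/b color axes.

```python
# Minimal sRGB (0-255) -> CIELAB conversion for one pixel, D65 white point.
# Standard formula; NOT the custom node's implementation.

def srgb_to_lab(r, g, b):
    # 1. sRGB 0-255 -> linear RGB 0-1 (undo gamma)
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2. linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl

    # 3. XYZ -> LAB (normalized by the D65 white point)
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Once a pixel is in LAB, scaling only L darkens or brightens it without touching the a/b color axes, which is exactly the independence the node exploits.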

Parameter Definitions

The HDR Effects with LAB Adjust node includes the following settings:

Re-display of the HDR Effects with LAB Adjust node UI, showing parameters like hdr_intensity and shadow_intensity.

  • Brightness Adjustments
    • hdr_intensity: Strength of brightness adjustment
    • shadow_intensity: Darkens shadow areas
    • highlight_intensity: Brightens highlight areas
    • gamma_intensity: Darkens the overall image
  • Color Adjustments
    • ab_strength: Strength of color adjustment
    • a_adjustment: Adjusts the green ↔ red balance
    • b_adjustment: Adjusts the blue ↔ yellow balance
  • Final Touches
    • contrast: Adjusts contrast
    • enhance_color: Adjusts saturation (color intensity)

Illustration of an anime-style female character standing in a bar at night, with a neon sign reading "Bar Comfy" and shelves of bottles.

By default, the HDR Effects with LAB Adjust node:

  • Emphasizes dark areas to increase contrast
  • Adds a slight red tint targeting skin tones
  • Reduces the yellowish tint common in AI-generated images

Comparing Illustrations

Let’s compare the results:

The HDR Effects with LAB Adjust node is effective on its own, but combining it with ComfyUI-SuperBeastsAI Auto Color Correction often yields even better results. I recommend using them together.

Original

Illustration of an anime-style female character standing in a bar at night, with a bluish, muted original version.
The image appears bluish and muted.

AI Auto Color Correction (Super POP)

Illustration of an anime-style female character standing in a bar at night, with Super POP auto color correction removing muteness and enhancing skin tones.
Removes muteness and clarifies skin tones.

HDR Correction (HDR Effects with LAB Adjust)

Illustration of an anime-style female character standing in a bar at night, with HDR Effects with LAB Adjust reducing yellowness and enhancing contrast.
Reduces yellowness and tightens blacks for a sharper look.

Workflow

Here’s the workflow used for this color correction:

Screenshot of ComfyUI workflow, including Super POP Color Adjustment and HDR Effects with LAB Adjust nodes.

Other Custom Nodes

Here’s a quick overview of additional custom nodes:

Quadruple CLIP Loader (Set Device)

UI of the Quadruple CLIP Loader (Set Device) node under easygoing-nodes, with settings for clip_name1 to clip_name4 and device.

Triple CLIP Loader (Set Device)

UI of the Triple CLIP Loader (Set Device) node under easygoing-nodes, with settings for clip_name1 to clip_name3 and device.

Load CLIP Vision (Set Device)

UI of the Load CLIP Vision (Set Device) node under easygoing-nodes, with settings for clip_name and device.

These nodes, similar to ComfyUI’s Load CLIP and DualCLIPLoader, allow you to specify the text encoder’s loading destination.

Setting device: cpu loads the model into RAM and processes it on the CPU.

Benefits of processing text encoders on CPU

Save Image With Prompt

UI of the Save Image With Prompt node under easygoing-nodes, with settings for filename_prefix, positive_prompt, negative_prompt, caption, and numbers.

  • filename_prefix: Prefix for the saved file name
  • positive_prompt: Positive prompt to embed as metadata
  • negative_prompt: Negative prompt to embed as metadata
  • caption: Image caption
  • numbers: Append sequential numbers to the end of the file name

The Save Image With Prompt node saves PNG images with metadata for positive_prompt, negative_prompt, and caption.

The caption metadata is designed for cases where captions are automatically generated using models like CLIP-Vision or Florence-2.

numbers defaults to True; setting it to False omits the sequential numbering.
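Embedding prompts as PNG metadata can be done with Pillow's text chunks. The sketch below is an assumption about roughly how such a node works, not the node's actual code; the key names simply mirror the node's parameters.

```python
# Hedged sketch: saving a PNG with prompt metadata via Pillow text chunks.
# Key names follow the Save Image With Prompt parameters; the real node's
# internals may differ.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_prompt(image, path, positive_prompt, negative_prompt="", caption=""):
    meta = PngInfo()
    meta.add_text("positive_prompt", positive_prompt)
    meta.add_text("negative_prompt", negative_prompt)
    meta.add_text("caption", caption)
    image.save(path, pnginfo=meta)

img = Image.new("RGB", (64, 64), "white")
save_with_prompt(img, "sample.png", "cat, cute, sleeping", "lowres", "a sleeping cat")

# Pillow exposes PNG text chunks on reload via the .text mapping
reloaded = Image.open("sample.png")
print(reloaded.text["positive_prompt"])  # cat, cute, sleeping
```

Any viewer that reads PNG text chunks (such as XnView MP, mentioned below) can then display these fields directly.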

Sample Workflow

ComfyUI workflow diagram including the Save Image With Prompt node and a preview of the generated image.

Reference: Display in XnView MP (image management and editing software)

Screenshot of XnView MP, showing a thumbnail list of images and metadata (positive_prompt, negative_prompt, caption) for the selected image.

With an app that can read metadata, you can view prompts and captions without opening ComfyUI.

Installing ComfyUI-easygoing-nodes

Installing ComfyUI-easygoing-nodes follows the standard process for custom nodes.

ComfyUI-Manager search screen, showing Easygoing Nodes after searching for "easygoing."

Search for “easygoing” in ComfyUI-Manager, and Easygoing Nodes will appear in the list.

Manual Installation

For manual installation, run the following command in the custom_nodes folder:

git clone https://github.com/easygoing0114/ComfyUI-easygoing-nodes.git

Verifying ComfyUI-easygoing-nodes Loading

When ComfyUI starts, a successful load of ComfyUI-easygoing-nodes will display the following log:

Loading ComfyUI-easygoing-nodes with module replacements...
✓ Successfully replaced comfy.sdxl_clip with custom implementation
✓ Successfully replaced comfy.text_encoders.hidream with custom implementation
Module replacement process completed!

This log confirms that the CLIP-G prototype and HiDream text encoder processing are functioning correctly.

Conclusion: Try ComfyUI-easygoing-nodes!

  • Prototype improvements for CLIP-G processing
  • CPU processing for HiDream’s text encoder
  • LAB color space correction node

This was my first time creating custom nodes for ComfyUI, and I was amazed by its flexible implementation.

Not only can you create original nodes, but you can also replace ComfyUI’s core files with modified versions without altering the originals, allowing for safe customization.
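The core-file replacement described above typically relies on Python's module cache: if a modified module is registered in sys.modules before the original is imported, every later import resolves to the patched version while the original files on disk stay untouched. The sketch below demonstrates the general technique with a stand-in module; the "comfy.sdxl_clip" name mirrors the article, but this is not the node pack's actual code.

```python
# Hedged sketch of module replacement via sys.modules. A real node pack
# would load its modified .py file instead of building a stand-in module.

import sys
import types

# Build a stand-in module (purely illustrative contents)
patched = types.ModuleType("comfy.sdxl_clip")
patched.MARKER = "custom implementation"

# Register the parent package and the patched submodule in the module cache
pkg = sys.modules.setdefault("comfy", types.ModuleType("comfy"))
sys.modules["comfy.sdxl_clip"] = patched
pkg.sdxl_clip = patched  # keep attribute access consistent with the cache

# Any later `import comfy.sdxl_clip` now resolves to the patched module
import comfy.sdxl_clip
print(comfy.sdxl_clip.MARKER)  # custom implementation
```

Because only the cache entry changes, deleting the custom node restores the stock behavior immediately, which is what makes this style of customization safe.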

Illustration of an anime-style female character standing in a bar at night, with a neon sign reading "A BAR COMFY" and shelves of bottles.
Go ahead and customize it to your liking!

I’d like to extend my gratitude to Shiba*2 and SuperBeastsAI for generously providing code for this project.

Thank you for reading!