Revitalize Your Artwork's Faces! How to Use the Detailer with SAM3 Model
- SAM models detect outlines.
- SAM3 is lightweight and highly accurate.
- SAM3 handles the entire Detailer process.
Introduction
Hello, this is Easygoing.
Today, I'll introduce the latest way to use Detailer—a powerful tool for making partial corrections to your AI-generated illustrations.
What is Detailer?
Detailer is a feature that upscales and redraws specific parts of an illustration at a higher resolution.
In image generation, small subjects often result in inaccurate or messy rendering. Detailer solves this by enlarging the target area to the model's recommended resolution and redrawing it, dramatically improving the detail.
Detailer is especially effective for critical areas or parts prone to failure in illustrations, such as faces and hands.
Image Recognition AI + Outline Detection AI
To apply corrections with Detailer, we utilize two types of AI: an Image Recognition AI and an Outline Detection AI.
```mermaid
flowchart LR
    subgraph "Image Recognition"
        A1(CLIPseg)
        B1(YOLO)
        C1(Florence-2)
    end
    subgraph "Outline Detection"
        A2(SAM)
        B2(SAM)
        C2(SAM2)
    end
    subgraph "Redrawing"
        A3(Detailer)
        B3(Detailer)
        C3(Detailer)
        D3(Detailer)
    end
    subgraph "Image Recognition & Outline Detection"
        D1(SAM3)
    end
    A1 --> A2
    A2 ---> A3
    B1 --> B2
    B2 ---> B3
    C1 --> C2
    C2 ---> C3
    D1 ---> D3
```
| Model | Size | Use Case | Speed | Accuracy | Security |
|---|---|---|---|---|---|
| CLIPseg + SAM | 3.1 GB | General | 🟢 | 🟡 | 🟢 |
| YOLO + SAM | 2.6 GB | Single Object | 🟢 | 🟢 | ❌ |
| Florence-2 + SAM2 | 4.2 GB | General | 🟡 | 🟡 | 💯 |
| SAM3 | 3.5 GB | General | 🟢 | 🟢 | 🟢 |
The AI used for detecting outlines is Meta's SAM model (Segment Anything Model).
The SAM model has evolved from SAM1 to SAM2, and the latest SAM3 model, released on November 19, 2025, has been enhanced with image recognition capabilities in addition to its outline detection.
This integration allows the SAM3 model alone to handle the entire Detailer process.
For example, SAM3 can recognize the photo on an ID card as a "face."
The SAM3 model offers improved processing speed due to a simplified process and significantly enhanced detection accuracy.
Applying to Download the SAM3 Model
Now, let's look at the actual steps for using the SAM3 model.
First, you download the SAM3 model from the following Hugging Face page:
To download the SAM3 model, you need to register a Hugging Face account and submit a download request by entering your contact information.
If your application is approved, you will typically receive an email notification within a few hours, and the model will become available for download.
Downloading sam3.pt
Once the model is available for download, select and download the sam3.pt file from the list.
Place the downloaded model file in the ComfyUI/models/sam3 folder.
SAM3 Model Workflow!
Here is the workflow for redrawing faces and hands using the SAM3 model.
Models Used
- Base Model: mellow_pencil-XL-v1.0.0-base_clear
- AI Upscaler: IllustrationJaNai_V1_ESRGAN_135k
- Outline Detection Model: sam3.pt
Let's now walk through the function of each node in the process.
Loading the SAM3 Model
First, load the model using the (down) Load SAM3 model node.
(down) Load SAM3 model Node Parameters
model_path: The path to the model (folder and file name where it was placed).
If you placed the SAM3 model in a non-default folder or renamed the file, enter the correct path in model_path.
Detecting the Target with the SAM3 Model
The SAM3 text segmentation node uses the SAM3 model to detect the target object and its outline.
SAM3 text segmentation Node Parameters
- confidence_threshold: The detection threshold.
- text_prompt: The object(s) to be detected.
- max_detections: The maximum number of detections.
A small number is shown in the upper left of each object detected by the SAM3 model (the confidence score). By setting the confidence_threshold higher than this number, you can exclude uncertain detections from the process.
For text_prompt, enter the objects you want to detect, separated by commas.
max_detections sets the maximum number of detections, or -1 for unlimited.
Note that the subsequent Masks Combine Batch node serves to merge the individually detected mask regions into a single image.
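To make the flow concrete, here is a minimal sketch in plain Python of what the detection filtering and mask merging amount to conceptually. The function names and the list-of-lists mask representation are my own illustration, not the actual node code:

```python
# Conceptual sketch of "SAM3 text segmentation" filtering plus
# "Masks Combine Batch": keep detections whose confidence clears the
# threshold, cap the count, then OR the individual masks together.

def filter_detections(detections, confidence_threshold, max_detections=-1):
    """detections: list of (confidence, mask) pairs; mask is a 2D 0/1 grid."""
    kept = [d for d in detections if d[0] >= confidence_threshold]
    kept.sort(key=lambda d: d[0], reverse=True)  # most confident first
    if max_detections >= 0:
        kept = kept[:max_detections]             # -1 means unlimited
    return kept

def combine_masks(masks):
    """Merge individual masks into one image by logical OR, pixel by pixel."""
    combined = [[0] * len(masks[0][0]) for _ in masks[0]]
    for mask in masks:
        for y, row in enumerate(mask):
            for x, v in enumerate(row):
                combined[y][x] |= v
    return combined

face  = (0.92, [[1, 1], [0, 0]])
hand  = (0.75, [[0, 0], [1, 0]])
noise = (0.30, [[0, 0], [0, 1]])  # below threshold, excluded

kept = filter_detections([face, hand, noise], confidence_threshold=0.5)
merged = combine_masks([m for _, m in kept])
print(merged)  # [[1, 1], [1, 0]] — the uncertain detection is gone
```

This is why raising confidence_threshold above the score shown in the preview excludes that detection from everything downstream.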
Adjusting the Mask Region
The regions detected by the SAM3 model tend to "fill in the middle" rather than strictly follow the outline, making them slightly smaller than the actual contour.
To correct this, we use the Grow Mask node to expand the region.
Furthermore, the Image Matting node is used to compare and refine the detected region against the original image. This smooths out complex boundaries, like hair, and ensures a more natural blend.
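The expansion step can be pictured as a simple binary dilation. The sketch below is an illustrative stand-in for the Grow Mask node, not its real implementation:

```python
# Minimal sketch of what "Grow Mask" does: expand the masked region
# outward by `grow` pixels (4-neighbourhood dilation), compensating for
# SAM3 masks that sit slightly inside the true outline.

def grow_mask(mask, grow=1):
    h, w = len(mask), len(mask[0])
    for _ in range(grow):
        out = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
        mask = out
    return mask

mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(grow_mask(mask, grow=1))  # the centre pixel grows into a plus shape
```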
Redrawing with Detailer
Next, the Mask to SEGS node defines the actual area to be redrawn based on the detected region.
Mask to SEGS Node Parameters
- combined (usually False): Integrates all detected regions into a single, unified area.
- crop_factor: Expands the cropped area to allow for a surrounding margin. For example, crop_factor = 3.0 means the redrawing process covers an area 3 times the size of the detected region in height and width.
- drop_size: Excludes regions whose longest side is smaller than drop_size from the redrawing process.
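The crop_factor arithmetic is straightforward to write down. This is a hypothetical sketch of the idea, with a clamp to the image bounds, not the Impact Pack source:

```python
# Hypothetical sketch of the crop_factor logic in "Mask to SEGS":
# scale the detected bounding box by crop_factor around its centre,
# then clamp the result to the image bounds.

def crop_region(bbox, crop_factor, image_w, image_h):
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half_w = (x2 - x1) * crop_factor / 2
    half_h = (y2 - y1) * crop_factor / 2
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(image_w, int(cx + half_w)), min(image_h, int(cy + half_h)))

# A 100x100 face at (450, 450)-(550, 550) with crop_factor = 3.0
# becomes a 300x300 crop, giving the redraw some surrounding context.
print(crop_region((450, 450, 550, 550), 3.0, 1024, 1024))  # (350, 350, 650, 650)
```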
Finally, this information is passed to the Detailer (SEGS) node to perform the actual redrawing.
Detailer (SEGS) Node Parameters
- guide_size: The size to which the region is enlarged before redrawing.
- max_size: If a region is larger than this size, it is scaled down for processing.
- cycle: The number of times the redrawing process is repeated.
- noise_mask_feather (0–100): The amount of blurring applied to the redraw boundary. This creates a smoother transition and reduces the noticeable seam from the redrawing.
By setting guide_size to the model's recommended resolution, even small targets can be redrawn accurately.
For your reference, here are my settings:
- combined: False
- crop_factor: 1.2
- drop_size: 100
- guide_size: 1024
- max_size: 1024
- cycle: 1
- noise_mask_feather: 20
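To see how guide_size and max_size interact, here is a rough sketch of the scaling decision (my own simplification under stated assumptions, not the Impact Pack source):

```python
# Rough sketch: derive the upscale factor for a cropped region from
# guide_size and max_size. Assumes both act on the region's longest side.

def redraw_scale(region_w, region_h, guide_size, max_size):
    longest = max(region_w, region_h)
    scale = guide_size / longest       # enlarge up to the guide size...
    if longest * scale > max_size:     # ...but never beyond max_size
        scale = max_size / longest
    return scale

# A 256-px face crop is enlarged 4x so it is redrawn at the model's
# recommended 1024-px resolution, then scaled back down after sampling.
print(redraw_scale(256, 256, guide_size=1024, max_size=1024))  # 4.0
```

This is why even a tiny face ends up being sampled at the model's native resolution, which is where the quality gain comes from.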
The Actual Process
The actual Detailer process unfolds as follows:
Custom Node Installation
The following custom nodes are used in this workflow:
ComfyUI-SAM3
ComfyUI-Impact-Pack
was-node-suite
ComfyUI-Image-Filters
If any custom nodes are missing, you can have them automatically installed when loading the workflow, or you can install them individually using the ComfyUI-Manager.
How to Use ComfyUI-Manager
Conclusion: Give the SAM3 Model a Try!
- SAM models detect outlines.
- SAM3 is lightweight and highly accurate.
- SAM3 handles the entire Detailer process.
Today, I introduced the SAM3 model.
In the past, the combination of Image Recognition AI and Outline Detection AI led to confusion about which model to use. With the arrival of the SAM3 model, however, it has become possible to complete all tasks using SAM3 alone.
The application range of Image Recognition and Outline Detection AI is vast, and I believe it will become an essential skill for anyone using AI Image Generation in the future.
Thank you for reading until the end!