How to Find Models on Civitai and Hugging Face! Complete List of Licenses for the Latest Local Image & Video Generation AI Models
- Civitai.red allows viewing of NSFW content
- Next-generation models have separated Text Encoder, UNet/Transformer, and VAE components
- Always check the license before use
Introduction
Hello, this is Easygoing.
In this post, I’ll explain how to search for models on Civitai and Hugging Face, along with a comprehensive list of licenses for local image and video generation AI models.
Today, we’re focusing on how to find image generation AI models.
Finding Models on Civitai and Hugging Face!
As of May 2026, the two primary sites for discovering image generation AI models are Civitai and Hugging Face.
Civitai
- Established: November 2022
- Specialized in: Image generation AI and video generation AI
- Feature: Fun browsing experience with sample images
Civitai is a platform dedicated to image and video generation AI models, launched after the release of the open-source Stable Diffusion 1.
On Civitai, models are shared along with sample illustrations and prompts, so you can easily see what kind of images each model can generate while browsing.
Civitai’s Two Domains
As of May 2026, Civitai operates two domains:
- Civitai.com: All-ages version, supports credit card payments
- Civitai.red: Displays NSFW content, cryptocurrency payments only
NSFW stands for Not Safe For Work, meaning adult or violent content and the like.
Originally, Civitai.com allowed NSFW content as well, but on May 22, 2025, it received a payment suspension notice from credit card companies, including VISA, over its handling of excessively explicit content.
After implementing measures to separate all-ages content as “Civitai Green,” the platform settled on its current structure from April 16, 2026: Civitai.com for the all-ages version and Civitai.red for the NSFW version.
Civitai.com is the main site. It does not display NSFW content beyond PG-13, but credit card payments are available.
Civitai.red allows viewing of NSFW content, but payments are limited to cryptocurrency.
Except for NSFW content, both sites show the same models. Use Civitai.com if you want to pay with a credit card, and Civitai.red if you want to view NSFW content.
Searching for Models on Civitai
Let’s try searching for models on Civitai.com.
First, visit the Civitai.com homepage.
To search for models, select the Models tab (second from the left).
Then click Filters on the right to refine your search. Here, I selected Month under Period and Checkpoint under Type.
This displays the highest-rated Checkpoint models (models containing Text Encoder, UNet/Transformer, and VAE all-in-one) from the past month.
If you also want to see NSFW content, change the URL in the address bar from civitai.com to civitai.red.
Browse the sample images and find models you like.
This time, the Hyphoria model looked promising, so I selected it.
Checking Model Details
The Hyphoria model page shows many more samples and recommended settings.
Scroll to the very bottom and you’ll find a License section where you can check what usage purposes are permitted for the model.
The Hyphoria model has the following licenses:
- CreativeML Open RAIL++-M: Standard SDXL license
- Fair AI Public License 1.0-SD: Requires disclosure of merge recipes
- NoobAI-XL-License: Prohibits commercial use of the model
When a model has multiple licenses, all restrictions listed apply.
The Hyphoria model allows creation of derivative models, but derivative models must disclose merge recipes and commercial use is prohibited.
Models created by combining multiple models are called merge models. Merge models inherit all license restrictions from the original models, so the newer the model, the more complex its licensing tends to become.
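The inheritance rule described above can be sketched as a simple set union: a restriction applies to a merge model if any parent imposes it. The license names and restriction flags below are illustrative stand-ins, not a real licensing database.

```python
# Sketch: a merge model inherits the union of all parent restrictions.
# License names and flags are illustrative examples, not legal data.

def combined_restrictions(*parent_licenses: dict) -> dict:
    """Return the effective restrictions of a merge model.

    Each parent license is a dict of restriction flags; a restriction
    applies to the merge if ANY parent imposes it.
    """
    merged = {}
    for lic in parent_licenses:
        for restriction, applies in lic.items():
            merged[restriction] = merged.get(restriction, False) or applies
    return merged

# Hyphoria-style example: three stacked licenses, as in the list above.
rail_pp = {"no_commercial_use": False, "disclose_merge_recipe": False}
fair_ai = {"no_commercial_use": False, "disclose_merge_recipe": True}
noobai  = {"no_commercial_use": True,  "disclose_merge_recipe": False}

effective = combined_restrictions(rail_pp, fair_ai, noobai)
print(effective)  # both restrictions end up applying
```

This is why merge-model licensing only ever accumulates: a permissive parent cannot cancel a restriction imposed by another parent.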
Image Generation AI Model Licenses
Image Generation AI Model License List! Is Commercial Use Allowed? | Kimama / Easygoing
SDXL Anime Model Licenses
On Civitai model pages, a license is also shown in the model card on the right, but only one license can be selected there at upload time, so it is often incomplete. Always check the main description text for the full license information.
Downloading and Placing the Model
Now let’s actually download the Hyphoria model.
Click the Download button on the right-side model card to start downloading.
Once the download is complete, place the file in the following folder:
For ComfyUI
- ComfyUI/models/checkpoints
For Stability Matrix
- Models/StableDiffusion
Hyphoria Model Recommended Settings Workflow
I created a ComfyUI workflow using Hyphoria’s recommended settings.
Hugging Face
In the second half, I’ll introduce Hugging Face, another major platform for publishing models alongside Civitai.
- Established in 2016 with the philosophy of making AI accessible to everyone
- Publishes a wide variety of AI models, including large language models (LLMs)
- Often recognized as the official distribution hub
Hugging Face was founded in 2016 under the philosophy of making AI open to everyone. It hosts all kinds of AI models, including large language models (LLMs), and can also be combined with its CPU/GPU rental services to serve AI applications.
While Civitai sometimes hosts reposted models from Hugging Face and other sources, Hugging Face is where original models are often uploaded and is frequently recognized as the official distribution location.
Downloading a Photorealistic Model
Let’s try downloading a model from Hugging Face.
This time, we’ll download the Z-Image model from Alibaba, which excels at photorealistic illustrations.
Next-Generation Models Have Separated Text Encoder, Transformer, and VAE
Image generation AI models consist of the following three components:
```mermaid
flowchart TB
    subgraph Checkpoint
    A1(Text Encoder)
    B1(UNet / Transformer)
    C1(VAE)
    end
```
- Text Encoder: Analyzes the prompt
- UNet / Transformer: Generates the image
- VAE: Converts images to and from the compressed latent space
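The division of labor among the three components can be sketched with toy tensors. These are NumPy stand-ins with arbitrary shapes chosen for illustration, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(prompt: str) -> np.ndarray:
    """Stand-in text encoder: maps the prompt to an embedding."""
    # A real encoder tokenizes the prompt and runs a transformer.
    return rng.standard_normal((77, 768))

def denoiser(embedding: np.ndarray, latent: np.ndarray) -> np.ndarray:
    """Stand-in UNet/Transformer: iteratively refines the latent."""
    for _ in range(4):  # a real sampler runs ~20-50 steps
        # In a real model, the prompt embedding conditions each step.
        latent = latent - 0.1 * latent  # placeholder "denoising" update
    return latent

def vae_decode(latent: np.ndarray) -> np.ndarray:
    """Stand-in VAE decoder: expands a (H/8, W/8, 4) latent to pixels."""
    h, w, c = latent.shape
    return np.zeros((h * 8, w * 8, 3))

emb = text_encoder("a cat")
latent = rng.standard_normal((64, 64, 4))  # latent for a 512x512 image
image = vae_decode(denoiser(emb, latent))
print(image.shape)  # (512, 512, 3)
```

The key point is the data flow: prompt in, embedding to the denoiser, latent to the VAE, pixels out. A Checkpoint file bundles all three; next-generation models ship each stage as a separate file.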
The Hyphoria model we downloaded earlier from Civitai is a derivative of Stable Diffusion XL and was distributed in Checkpoint format, which bundles all three components.
In contrast, models released after SDXL are generally distributed with the three components separated due to increased model size.
```mermaid
gantt
    title Image Generative AI Roadmap
    dateFormat YYYY-MM-DD
    tickInterval 12month
    axisFormat %Y
    section Stability AI
    Stable Diffusion 1 : 2022-08-22, 2026-05-08
    Stable Diffusion XL : 2023-07-26, 2026-05-08
    Stable Diffusion 3 : 2024-06-12, 2026-05-08
    section Fal.ai
    AuraFlow : 2024-07-12, 2026-05-08
    section Black Forest Labs
    Flux.1 : 2024-08-01, 2026-05-08
    Flux.2 : 2025-11-25, 2026-05-08
    section DeepSeek.ai
    janus-pro : 2025-01-25, 2026-05-08
    section HiDream-ai
    HiDream : 2025-04-06, 2026-05-08
    section Zhipu AI
    CogVideoX : 2024-08-06, 2026-05-08
    GLM-Image : 2026-01-12, 2026-05-08
    section Rhymes AI
    Allegro : 2024-10-22, 2026-05-08
    section Genmo
    Mochi : 2024-10-25, 2026-05-08
    section Tencent
    Hunyuan video : 2024-12-03, 2026-05-08
    Hunyuan image : 2025-09-09, 2026-05-08
    section lllyasviel
    Framepack : 2025-04-17, 2026-05-08
    section Lightricks
    LTX : 2024-12-11, 2026-05-08
    section StepFun
    Step-Video-T2V : 2025-02-17, 2026-05-08
    section Alibaba
    Wan : 2025-02-25, 2026-05-08
    Qwen-Image : 2025-08-04, 2026-05-08
    Z-Image : 2025-11-25, 2026-05-08
    section NVIDIA
    Cosmos-Predict2 : 2025-04-30, 2026-05-08
    section CircleStone Labs
    Anima : 2026-01-26, 2026-05-08
    section Baidu
    ERNIE-Image : 2026-04-07, 2026-05-08
```
Z-Image model composition
```mermaid
flowchart TB
    subgraph Z-Image
    A1(Qwen3-4B)
    B1(Z-Image-Transformer)
    C1(Flux.1-vae)
    end
```
The Z-Image model uses Alibaba’s large language model Qwen3-4B as the text encoder, Z-Image-Transformer as the transformer, and Flux.1-vae as the VAE.
Let’s download each component from Hugging Face.
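The three downloads can be organized as a simple plan before fetching anything. The repository IDs below are placeholders, so check the actual Hugging Face pages for the real repo names; the target folders are the ones listed in the sections that follow.

```python
# Sketch: a download plan for Z-Image's three separated components.
# Repository IDs are placeholders -- look up the real repo names on
# Hugging Face before downloading. Target folders are for ComfyUI.

ZIMAGE_COMPONENTS = {
    "text_encoder": {
        "repo_id": "example/qwen3-4b-text-encoder",      # placeholder
        "target": "ComfyUI/models/text_encoders",
    },
    "transformer": {
        "repo_id": "example/Z-Image_clear_photoreal",    # placeholder
        "target": "ComfyUI/models/diffusion_models",
    },
    "vae": {
        "repo_id": "example/Z-Image_natural_vae",        # placeholder
        "target": "ComfyUI/models/vae",
    },
}

for name, spec in ZIMAGE_COMPONENTS.items():
    # With the huggingface_hub package installed, each file could be
    # fetched via hf_hub_download(repo_id=spec["repo_id"], filename=...).
    print(f"{name}: {spec['repo_id']} -> {spec['target']}")
```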
Text Encoder: Qwen3-4B
Z-Image uses Alibaba’s Qwen3-4B as its text encoder.
The original Qwen3-4B model is published by Alibaba on Hugging Face in split form, but Comfy-Org also distributes a combined version optimized for image generation. We’ll use that this time.
The model is available in BF16, FP8, and FP4 formats. FP8 is roughly half the size of BF16 and FP4 roughly a quarter; both can run faster on the latest NVIDIA GPUs, but at some cost in image quality.
This time, we’ll download the original BF16 version.
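These file sizes follow directly from bytes per parameter. A quick back-of-the-envelope for a 4-billion-parameter encoder (sizes ignore headers and metadata, and FP4 assumes a packed 4-bit layout):

```python
# Approximate file size of a 4B-parameter model at each precision.
PARAMS = 4_000_000_000
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "BF16": 2, "FP8": 1, "FP4": 0.5}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1024**3
    print(f"{fmt}: ~{gb:.1f} GiB")
# BF16 comes out around 7.5 GiB, FP8 around 3.7 GiB, FP4 around 1.9 GiB.
```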
GPU and Floating-Point Compatibility
| GPU | FP32 | FP16 | BF16 | FP8 | FP4 |
|---|---|---|---|---|---|
| NVIDIA RTX 5000 Series (2025–) | ✅ | ✅ | ✅ | ✅ | ✅ |
| NVIDIA RTX 4000 Series (2022–) | ✅ | ✅ | ✅ | ✅ | ❌ |
| NVIDIA RTX 3000 Series (2020–) | ✅ | ✅ | ✅ | ❌ | ❌ |
| NVIDIA RTX 2000 Series (2018–) | ✅ | ✅ | ❌ | ❌ | ❌ |
| NVIDIA GTX 1000 Series (2016–) | ✅ | ⚠️ | ❌ | ❌ | ❌ |
| AMD Radeon | ✅ | ⚠️ | ❌ | ❌ | ❌ |
| Intel Arc | ✅ | ⚠️ | ❌ | ❌ | ❌ |
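The compatibility table can be encoded as a lookup so a script can pick the most compact precision a given GPU supports. Series names are abbreviated here, and the table's ⚠️ entries are treated as unsupported for simplicity; remember that smaller formats also trade away some image quality.

```python
# Precision support per GPU family, transcribed from the table above.
# Warning entries (slow/partial FP16) are conservatively left out.
SUPPORT = {
    "RTX 5000": {"FP32", "FP16", "BF16", "FP8", "FP4"},
    "RTX 4000": {"FP32", "FP16", "BF16", "FP8"},
    "RTX 3000": {"FP32", "FP16", "BF16"},
    "RTX 2000": {"FP32", "FP16"},
    "GTX 1000": {"FP32"},
    "Radeon":   {"FP32"},
    "Arc":      {"FP32"},
}

def smallest_precision(gpu: str) -> str:
    """Return the most compact format the GPU supports."""
    for fmt in ("FP4", "FP8", "BF16", "FP16", "FP32"):
        if fmt in SUPPORT[gpu]:
            return fmt
    raise ValueError(f"unknown GPU family: {gpu}")

print(smallest_precision("RTX 4000"))  # FP8
print(smallest_precision("RTX 3000"))  # BF16
```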
Text Encoder Compression Formats and Image Quality
Place the downloaded model in the following folder:
ComfyUI
- ComfyUI/models/text_encoders
Stability Matrix
- Models/TextEncoders
Transformer: Z-Image_clear_photoreal
For the transformer, I’ll use the Z-Image_clear_photoreal model that I published.
The Z-Image_clear_photoreal model is a merge of Alibaba’s Z-Image-Base and Z-Image-Turbo models. It offers richer variations while improving stability.
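Merging two models of the same architecture usually amounts to a weighted average of their weights. Here is a minimal sketch with NumPy stand-ins; the 0.5 ratio is illustrative, not the actual Z-Image_clear_photoreal recipe:

```python
import numpy as np

def merge_state_dicts(a: dict, b: dict, ratio: float = 0.5) -> dict:
    """Linearly interpolate two models' weights: ratio*a + (1-ratio)*b."""
    assert a.keys() == b.keys(), "models must share an architecture"
    return {k: ratio * a[k] + (1 - ratio) * b[k] for k in a}

# Toy one-tensor "models" standing in for Z-Image-Base and Z-Image-Turbo.
base  = {"layer.weight": np.array([1.0, 2.0])}
turbo = {"layer.weight": np.array([3.0, 6.0])}

merged = merge_state_dicts(base, turbo, ratio=0.5)
print(merged["layer.weight"])  # [2. 4.]
```

Because every tensor of every parent flows into the result, this is also why a merge inherits every parent's license terms.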
Download the BF16 version and place it in the following folder:
ComfyUI
- ComfyUI/models/diffusion_models
Stability Matrix
- Models/DiffusionModels
VAE: Z-Image_natural_vae
Finally, download the VAE.
The default VAE for Z-Image models is Flux.1 [schnell]’s VAE, but this time I’ll use Z-Image_natural_vae, which I readjusted for more natural color representation.
Place the downloaded model in the following folder:
ComfyUI
- ComfyUI/models/vae
Stability Matrix
- Models/VAE
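The placement rules from the sections above can be collected into one lookup. The paths are exactly as listed; the helper function is just a convenience sketch.

```python
from pathlib import Path

# Target folders per component type, as listed in the sections above.
MODEL_DIRS = {
    "comfyui": {
        "checkpoint":   "ComfyUI/models/checkpoints",
        "text_encoder": "ComfyUI/models/text_encoders",
        "transformer":  "ComfyUI/models/diffusion_models",
        "vae":          "ComfyUI/models/vae",
    },
    "stability_matrix": {
        "checkpoint":   "Models/StableDiffusion",
        "text_encoder": "Models/TextEncoders",
        "transformer":  "Models/DiffusionModels",
        "vae":          "Models/VAE",
    },
}

def target_path(ui: str, component: str, filename: str) -> Path:
    """Where a downloaded model file should go for the given UI."""
    return Path(MODEL_DIRS[ui][component]) / filename

print(target_path("comfyui", "vae", "Z-Image_natural_vae.safetensors"))
```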
Z-Image_clear_photoreal Workflow
Here is a ready-to-run workflow using the downloaded models.
List of Image & Video Generation Models and Their Licenses
Finally, here is a compiled list of local image and video generation AI models along with their licenses.
| Developer | Model | License | Commercial Use |
|---|---|---|---|
| Stability AI | Stable Diffusion 1.x | CreativeML Open RAIL-M | ⚠️-✅ |
| | Stable Diffusion XL | CreativeML Open RAIL++-M | ⚠️-✅ |
| | Stable Diffusion 3 | Stability AI Community License | ⚠️ |
| Fal.ai | AuraFlow | Apache-2.0 | ✅ |
| Black Forest Labs | Flux.1 [schnell]<br>Flux.2 [klein] 4B | Apache-2.0 | ✅ |
| | Flux.1 [dev]<br>Flux.2 [dev]<br>Flux.2 [klein] 9B | FLUX [dev] Non-Commercial License | ❌-⚠️ |
| DeepSeek | Janus-Pro | DeepSeek Model License | ⚠️ |
| HiDream-ai | HiDream | MIT | ✅ |
| Zhipu AI | CogVideoX | CogVideoX License | ⚠️ |
| | GLM-Image | MIT | ✅ |
| Genmo | Mochi | Apache-2.0 | ✅ |
| Rhymes AI | Allegro | Apache-2.0 | ✅ |
| Lightricks | LTX-Video | LTX Community License | ⚠️ |
| Alibaba | Wan | Apache-2.0 | ✅ |
| | Qwen-Image | Apache-2.0 | ✅ |
| | Z-Image | Apache-2.0 | ✅ |
| Tencent | Hunyuan Video | Tencent Hunyuan Community License | ⚠️ |
| | Hunyuan Image | Tencent Hunyuan Community License | ⚠️ |
| lllyasviel | Framepack | Tencent Hunyuan Community License | ⚠️ |
| StepFun | Step-Video-T2V | MIT | ✅ |
| NVIDIA | Cosmos-Predict2 | NVIDIA Open Model License | ⚠️ |
| CircleStone Labs | Anima | NVIDIA Open Model License<br>CircleStone Labs Non-Commercial License | ❌-⚠️ |
| Baidu | ERNIE-Image | Apache-2.0 | ✅ |
Links
Links point to the latest versions as of May 2026 where multiple versions exist.
- Stable Diffusion 1.5 (fork)
- Stable Diffusion XL
- Stable Diffusion 3.5 Large
- AuraFlow 0.3
- Flux.1 [schnell]
- Flux.1 [dev]
- Flux.2 [dev]
- Flux.2 [klein] 9B
- Flux.2 [klein] 4B
- Janus-Pro-7B
- HiDream-I1-Dev
- CogVideoX1.5-5B-SAT
- GLM-Image
- mochi-1-preview
- Allegro
- LTX-2.3
- Wan2.2-I2V-A14B
- Qwen-Image-2512
- Z-Image
- HunyuanVideo
- HunyuanImage-3.0
- FramePack_F1_I2V_HY_20250503
- stepvideo-t2v
- Cosmos-Predict2.5-2B
- Anima-Preview3-base
- ERNIE-Image
Licenses
Among image and video generation AI models, those released under open standard licenses (MIT, Apache-2.0) can be used commercially with confidence.
Models under rights holders' proprietary licenses, on the other hand, often have terms whose interpretation is not yet settled. Always check the license yourself before using them.
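As a rough triage of the table above, licenses fall into three buckets. This is a sketch transcribing the table's classification, not legal advice:

```python
# Rough triage of the licenses in the table above.
# Open standard -> commercial OK; non-commercial -> prohibited;
# everything else -> proprietary terms you must read yourself.
OPEN_LICENSES = {"MIT", "Apache-2.0"}
NON_COMMERCIAL = {
    "FLUX [dev] Non-Commercial License",
    "CircleStone Labs Non-Commercial License",
}

def commercial_use(license_name: str) -> str:
    if license_name in OPEN_LICENSES:
        return "allowed"
    if license_name in NON_COMMERCIAL:
        return "prohibited"
    return "check the license terms yourself"

print(commercial_use("Apache-2.0"))                         # allowed
print(commercial_use("FLUX [dev] Non-Commercial License"))  # prohibited
```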
Summary: Try Using Civitai and Hugging Face
- Civitai.red allows viewing of NSFW content
- Next-generation models have separated Text Encoder, UNet/Transformer, and VAE components
- Always check the license before use
In this post, I covered how to find models on Civitai and Hugging Face and provided a list of licenses for local image and video generation AI models.
I try out new AI models as soon as they are released, but compiling this list made me realize there are still many models I haven’t touched yet.
I plan to continue reviewing various image generation AI models from time to time.
Thank you for reading until the end!