This repository hosts an improved Inpainting ControlNet checkpoint developed by the AlimamaCreative Team, succeeding the earlier alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha model.
Our latest inpainting model brings significant improvements over the previous version.
The following images were generated using a ComfyUI workflow (click here to download) with these settings:

`control-strength = 1.0`, `control-end-percent = 1.0`, `true-cfg = 1.0`
| Image & Prompt Input | Alpha Version | Beta Version |
|---|---|---|
Reference inference setup: t5xxl-FP16 and flux1-dev-fp8 models, 30-step inference at 1024px on an H20 GPU.
Different results can be achieved by adjusting the following parameters:
| Parameter | Recommended Range | Effect |
|---|---|---|
| control-strength | 0.6 - 1.0 | Controls how much influence the ControlNet has on the generation. Higher values result in stronger adherence to the control image. |
| control-end-percent | 0.35 - 1.0 | Determines at which step in the denoising process the ControlNet influence ends. Lower values allow for more creative freedom in later steps. |
| true-cfg (Classifier-Free Guidance Scale) | 1.0 or 3.5 | Influences how closely the generation follows the prompt. Higher values increase prompt adherence but may reduce image quality. |
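
For script-based use, these knobs surface as keyword arguments on the repository's Python pipeline. The call below is a minimal sketch: the argument names (`controlnet_conditioning_scale`, `control_guidance_end`, `true_guidance_scale`) are assumptions drawn from the upstream example and standard diffusers ControlNet conventions, so check `main.py` in your checkout for the exact signature. `pipe`, `image`, and `mask` are assumed to be built as in the quick-start sketch after the steps below.

```python
# Hypothetical mapping of the table above onto pipeline arguments.
# Assumes `pipe`, `image`, and `mask` are constructed as in the
# quick-start sketch further down.
result = pipe(
    prompt="a red sofa in a sunlit living room",
    height=1024,
    width=1024,
    control_image=image,                # source image to inpaint
    control_mask=mask,                  # white pixels mark the region to repaint
    controlnet_conditioning_scale=0.9,  # control-strength: 0.6 - 1.0
    control_guidance_end=0.9,           # control-end-percent: 0.35 - 1.0
    true_guidance_scale=3.5,            # true-cfg: 1.0 or 3.5
    num_inference_steps=28,
).images[0]
result.save("inpainted.png")
```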
To run locally:

1. Install the required diffusers version:

```bash
pip install diffusers==0.30.2
```

2. Clone the repository:

```bash
git clone https://github.com/alimama-creative/FLUX-Controlnet-Inpainting.git
```

3. Set `image_path`, `mask_path`, and `prompt` in `main.py`, then execute:

```bash
python main.py
```
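
If you would rather script the pipeline than edit `main.py`, the sketch below follows the example published in the upstream repository. The modules `controlnet_flux`, `transformer_flux`, and `pipeline_flux_controlnet_inpaint` ship with the cloned repo (run from the repository root); the checkpoint IDs are real, but the file paths, prompt, and seed are placeholders, and exact argument names may vary between releases.

```python
import torch
from diffusers.utils import load_image

# These modules ship with the cloned FLUX-Controlnet-Inpainting repo,
# so run this script from the repository root.
from controlnet_flux import FluxControlNetModel
from transformer_flux import FluxTransformer2DModel
from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

# Assemble the pipeline: FLUX.1-dev base model + inpainting ControlNet.
controlnet = FluxControlNetModel.from_pretrained(
    "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta",
    torch_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetInpaintingPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load the source image and its mask (white = region to repaint);
# replace these placeholder paths with your own files.
size = (1024, 1024)
image = load_image("image.png").convert("RGB").resize(size)
mask = load_image("mask.png").convert("RGB").resize(size)

generator = torch.Generator(device="cuda").manual_seed(24)
result = pipe(
    prompt='a white bucket with the word "FLUX" printed on it',
    height=size[1],
    width=size[0],
    control_image=image,
    control_mask=mask,
    num_inference_steps=28,
    generator=generator,
    controlnet_conditioning_scale=0.9,   # control-strength
    true_guidance_scale=3.5,             # true-cfg
).images[0]
result.save("flux_inpaint.png")
```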
Our model weights are released under the FLUX.1 [dev] Non-Commercial License.
Base model: black-forest-labs/FLUX.1-dev