This Python script (UNetExtractor.py) processes SafeTensors files for Stable Diffusion 1.5 (SD 1.5), Stable Diffusion XL (SDXL), and FLUX models. It extracts the UNet into a separate file and creates a new file with the remaining model components (without the UNet).
Example invocation: python UNetExtractor.py flux1-dev.safetensors flux1-dev_unet.safetensors flux1-dev_non_unet.safetensors --model_type flux --verbose
We've developed an extension for AUTOMATIC1111's Stable Diffusion Web UI that allows you to load and use the extracted UNet files directly within the interface. This extension seamlessly integrates with the txt2img workflow, enabling you to utilize the space-saving benefits of separated UNet files without compromising on functionality.
To use the extension, please visit our UNet Loader Extension Repository for installation and usage instructions.
Using UNets instead of full checkpoints can save a significant amount of disk space, especially for models that use large text encoders. Because the text encoder and VAE are often shared between checkpoints, storing them once and keeping only a per-model UNet avoids duplicating those components. This is particularly beneficial for models like FLUX, which have a very large number of parameters.
This tool helps you extract UNets from full checkpoints, allowing you to take advantage of these space-saving benefits across SD 1.5, SDXL, and open-source FLUX models.
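To illustrate the general idea, the sketch below partitions a checkpoint's tensor names by key prefix. This is an assumption about the approach, not the script's actual logic: it uses the conventional "model.diffusion_model." prefix under which SD 1.5/SDXL checkpoints store UNet weights (FLUX checkpoints use different prefixes), and the real script additionally handles threading and resource limits.

```python
# Minimal sketch (assumption, not the script itself): a SafeTensors checkpoint
# is a flat mapping of tensor names to tensors, so splitting out the UNet
# amounts to partitioning the keys by prefix.
def split_keys(keys, unet_prefix="model.diffusion_model."):
    """Partition checkpoint tensor names into UNet and non-UNet groups."""
    unet = [k for k in keys if k.startswith(unet_prefix)]
    non_unet = [k for k in keys if not k.startswith(unet_prefix)]
    return unet, non_unet

# Illustrative SD-style key names (hypothetical minimal checkpoint):
keys = [
    "model.diffusion_model.input_blocks.0.0.weight",  # UNet
    "first_stage_model.decoder.conv_in.weight",       # VAE
    "cond_stage_model.transformer.text_model.embeddings.token_embedding.weight",  # text encoder
]
unet, non_unet = split_keys(keys)
print(len(unet), len(non_unet))  # → 1 2
```

The two resulting groups would then be saved to the two output files given on the command line.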
Clone this repository or download the UNetExtractor.py script.
It's recommended to create a new virtual environment:
python -m venv unet_extractor_env
Activate the virtual environment:
On Windows: unet_extractor_env\Scripts\activate
On macOS/Linux: source unet_extractor_env/bin/activate
Install the required libraries with specific versions for debugging:
pip install numpy==1.23.5 torch==2.0.1 safetensors==0.3.1
If you're using CUDA, install the CUDA-enabled version of PyTorch:
pip install torch==2.0.1+cu117 -f https://download.pytorch.org/whl/cu117/torch_stable.html
Replace cu117 with your CUDA version (e.g., cu116, cu118) if different.
Optionally, install psutil for enhanced system resource reporting:
pip install psutil==5.9.0
Note: The versions above are examples and may need to be adjusted based on your system requirements and CUDA version. These specific versions are recommended for debugging purposes as they are known to work together. For regular use, you may use the latest versions of these libraries.
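For convenience, the pinned debugging versions above can be collected into a requirements file (version numbers taken from the installation commands above):

```text
numpy==1.23.5
torch==2.0.1
safetensors==0.3.1
psutil==5.9.0
```

Install with pip install -r requirements.txt.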
Run the script from the command line with the following syntax:
python UNetExtractor.py <input_file> <unet_output_file> <non_unet_output_file> --model_type <sd15|sdxl|flux> [--verbose] [--num_threads <num>] [--gpu_limit <percent>] [--cpu_limit <percent>]
<input_file>: Path to the input SafeTensors file (the full model)
<unet_output_file>: Path where the extracted UNet will be saved
<non_unet_output_file>: Path where the model without the UNet will be saved
--model_type: Specify the model type: sd15 for Stable Diffusion 1.5, sdxl for Stable Diffusion XL, or flux for FLUX models
--verbose: (Optional) Enable verbose logging for detailed process information
--num_threads: (Optional) Number of threads to use for processing. If not specified, the script automatically detects the optimal number of threads.
--gpu_limit: (Optional) Limit GPU usage to this percentage (default: 90)
--cpu_limit: (Optional) Limit CPU usage to this percentage (default: 90)
For Stable Diffusion 1.5 using CUDA (if available):
python UNetExtractor.py path/to/sd15_model.safetensors path/to/output_sd15_unet.safetensors path/to/output_sd15_non_unet.safetensors --model_type sd15 --verbose
For Stable Diffusion XL using CUDA (if available):
python UNetExtractor.py path/to/sdxl_model.safetensors path/to/output_sdxl_unet.safetensors path/to/output_sdxl_non_unet.safetensors --model_type sdxl --verbose
For FLUX models using CUDA (if available) with 8 threads and 80% GPU usage limit:
python UNetExtractor.py path/to/flux_model.safetensors path/to/output_flux_unet.safetensors path/to/output_flux_non_unet.safetensors --model_type flux --verbose --num_threads 8 --gpu_limit 80
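When --num_threads is omitted, the script auto-detects an optimal thread count. A plausible sketch of such a default, using only the standard library (an assumption for illustration, not the script's exact heuristic):

```python
import os

def default_num_threads(requested=None):
    """Return the requested thread count, or fall back to the CPU count.

    Illustrative sketch of --num_threads auto-detection; the real script's
    heuristic may differ.
    """
    if requested is not None and requested > 0:
        return requested
    return os.cpu_count() or 1  # os.cpu_count() can return None

print(default_num_threads(8))  # explicit request wins → 8
print(default_num_threads())   # auto-detected from the machine
```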
The script reads and writes model checkpoints using the safetensors library. After extracting UNet files using this tool, you can easily use them in AUTOMATIC1111's Stable Diffusion Web UI.
For detailed instructions, please refer to the UNet Loader Extension Repository.
When running the script with the --verbose flag, you'll see detailed debugging information, including the key, tensor shape, and classification of each tensor processed.
Example debug output:
2024-08-17 21:06:30,500 - DEBUG - Current UNet count: 770
2024-08-17 21:06:30,500 - DEBUG - ---
2024-08-17 21:06:31,142 - DEBUG - Processing key: vector_in.out_layer.weight
2024-08-17 21:06:31,142 - DEBUG - Tensor shape: torch.Size([3072, 3072])
2024-08-17 21:06:31,172 - DEBUG - Classified as non-UNet tensor
2024-08-17 21:06:31,172 - DEBUG - Current UNet count: 770
2024-08-17 21:06:31,172 - DEBUG - ---
2024-08-17 21:06:31,203 - INFO - Total tensors processed: 780
2024-08-17 21:06:31,203 - INFO - UNet tensors: 770
2024-08-17 21:06:31,203 - INFO - Non-UNet tensors: 10
2024-08-17 21:06:31,203 - INFO - Unique key prefixes found: double_blocks, final_layer, guidance_in, img_in, single_blocks, time_in, txt_in, vector_in
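The "timestamp - LEVEL - message" lines above correspond to a standard logging setup like the one below (inferred from the output format, not taken from the script):

```python
import io
import logging

# Build a logger whose output matches the debug lines shown above.
def make_logger(stream):
    logger = logging.getLogger("unet_extractor_demo")
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    )
    logger.addHandler(handler)
    return logger

buf = io.StringIO()
log = make_logger(buf)
log.debug("Current UNet count: 770")
print(buf.getvalue())  # e.g. "2024-08-17 21:06:30,500 - DEBUG - Current UNet count: 770"
```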
This output helps identify issues with tensor classification, resource usage, and overall processing flow.
If you encounter any issues:
Run the script with the --verbose flag to get detailed debugging information.
If you see an error like "A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.1 as it may crash.", try downgrading NumPy to version 1.23.5 as recommended in the installation instructions.
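A quick way to tell whether an installed NumPy is a 2.x release (which triggers the error above) is to inspect the major component of its version string; the helper below is illustrative:

```python
def numpy_major(version):
    """Return the major version number from a version string like '1.23.5'."""
    return int(version.split(".")[0])

# A major version of 2 or higher triggers the compatibility error above;
# pass numpy.__version__ to check your own install.
print(numpy_major("2.0.1"))   # → 2
print(numpy_major("1.23.5"))  # → 1
```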
Make sure you have the latest version of the safetensors library installed. If you continue to experience issues after trying these steps, please open an issue on the GitHub repository with details about your system configuration, the command you're using, and the full error message or debugging output.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page if you want to contribute.
This project is licensed under the MIT License. See the LICENSE file for details.
If you use UNet Extractor and Remover in your research or projects, please cite it as follows:
captainzero93. (2024). UNet Extractor and Remover for Stable Diffusion 1.5, SDXL, and FLUX. GitHub. https://github.com/captainzero93/unet-extractor
For commercial licensing, contact cyberjunk77@gmail.com.
This project uses the safetensors library developed by the Hugging Face team.