ReppiksProductions opened this issue 1 week ago
gen_img_diffusers.py and gen_img.py don't support FLUX.1 yet, please use flux_minimal_inference.py in the repo for now. See --help for usage.
Thanks! I will test this out when I get a break from the current training I'm doing.
I've tried this after activating the venv:
python flux_minimal_inference.py --ckpt_path "D:/flux1-dev2pro.safetensors" --clip_l "J:/stablediffusion1111s2/Data/Models/CLIP/clip_l.safetensors" --t5xxl "J:/stablediffusion1111s2/Data/Models/CLIP/t5xxl_fp16.safetensors" --ae "J:/stablediffusion1111s2/Data/Models/VAE/ae.safetensors" --output_dir "D:/" --prompt "A close up of a woman face" --steps 30 --guidance 3.5 --seed 666 --width 1024 --height 1024
but I get this error:
Traceback (most recent call last):
File "J:\train\sd-scripts\flux_minimal_inference.py", line 21, in <module>
from networks import oft_flux
ImportError: cannot import name 'oft_flux' from 'networks' (J:\train\sd-scripts\venv\Lib\site-packages\networks\__init__.py)
What could be wrong?
I already tried running pip install -r requirements.txt again.
You should double-check that the file is actually there. Look inside the networks folder that is inside the sd-scripts folder (e.g. \kohya_ss\sd-scripts\networks\oft_flux.py). If it is missing you may need to reinstall kohya-ss or grab that file from the correct branch on GitHub. If it is there, I don't know; someone else will have to help you.
One more test you can do is run python flux_minimal_inference.py --help and see if you get the same error. There may be something wrong with the import section of flux_minimal_inference.py. The top of my flux_minimal_inference.py looks like this:
import argparse
import datetime
import math
import os
import random
from typing import Callable, List, Optional
import einops
import numpy as np
import torch
from tqdm import tqdm
from PIL import Image
import accelerate
from transformers import CLIPTextModel
from safetensors.torch import load_file
from library import device_utils
from library.device_utils import init_ipex, get_preferred_device
from networks import oft_flux
init_ipex()
from library.utils import setup_logging, str_to_dtype
setup_logging()
import logging
...
EDIT: I figured it out! Adding the argument --merge_lora_weights significantly reduced the VRAM usage! It's using about 14 GB VRAM and I'm getting 2s/it or less now. I assumed merging the LORA weights would make it take up more memory, but now I see it does the opposite. Thanks again for your help @kohya-ss!
@kohya-ss I am able to generate images using flux_minimal_inference.py! Unfortunately it's very slow when I add the LORA I want to test. I have an RTX 4060 w/ 16GB VRAM and 64GB RAM. I managed to pass arguments to get it to require less memory, but I am still getting about 1.5 - 2 GB going into my shared GPU memory. Without the LORA it's around 13 - 14 GB used. That shared memory being used is giving me like 60s - 80s/it vs 2s/it without. (Generating a 768 x 1024 image)
I noticed, looking through my terminal history, that when the training script generates an image, it loads the 2nd text encoder to CPU: 'text_encoder [1] dtype: torch.bfloat16, device: cpu'. Maybe that can be done here too.
Are there any arguments I can pass or something I can change in one of the scripts to get it to generate with a little less VRAM?
These are the arguments I am passing:
python flux_minimal_inference.py --ckpt_path "C:\_Python Projects\StableDiffusionModels\Stable-diffusion\Flux\flux1-dev.safetensors" --clip_l "C:\_Python Projects\StableDiffusionModels\text_encoder\clip_l.safetensors" --t5xxl "C:\_Python Projects\StableDiffusionModels\text_encoder\t5xxl_fp16.safetensors" --ae "C:\_Python Projects\StableDiffusionModels\VAE\ae.safetensors" --clip_l_dtype float16 --ae_dtype float16 --t5xxl_dtype float8 --flux_dtype float8 --offload --output_dir "E:\_Dropbox\Dropbox\AI Source material\SD\Created Characters\Paityn_Hill\test_renders" --lora_weights "C:\_Python Projects\StableDiffusionModels\Lora\ptyn1_2-000160.safetensors" --interactive
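For what it's worth, the text-encoder-to-CPU idea mentioned above can be sketched roughly like this (an illustration only, with placeholder names such as tokenize_fn, clip_l and t5xxl; the script's actual --offload handling may differ): encode the prompt once on the GPU, then move the encoders back to CPU so only the transformer and the small embedding tensors stay in VRAM.
import torch

def encode_then_offload(clip_l, t5xxl, tokenize_fn, prompt, device="cuda"):
    # Run both text encoders on the GPU just for this one forward pass.
    clip_tokens, t5_tokens = tokenize_fn(prompt)
    with torch.no_grad():
        clip_l.to(device)
        t5xxl.to(device)
        pooled = clip_l(clip_tokens.to(device)).pooler_output
        t5_out = t5xxl(t5_tokens.to(device)).last_hidden_state
    # Move the encoders back to CPU so their weights free VRAM;
    # the returned embeddings are tiny by comparison.
    clip_l.to("cpu")
    t5xxl.to("cpu")
    torch.cuda.empty_cache()
    return pooled, t5_out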
Thanks, Kohya! In flux_minimal_inference.py, it is not possible (yet) to choose a specific sampler / scheduler, right?
Yeah it's weird. The "oft_flux.py" file is there. The top of my flux_minimal_inference.py looks exactly the same as yours. I just did a "git pull" to update my folder... I might need to reinstall everything.
The python flux_minimal_inference.py --help gives me the same error.
I have a kohya_gui installation as well. I've activated the venv there and it worked there, so IDK, I probably have to reinstall kohya_ss... Thanks!
Glad to see it worked. LoRA merging is done in advance so memory usage is the same as without LoRA.
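A rough sketch of why that is (generic LoRA math, not the repo's exact merge code): the low-rank delta is folded into each existing weight matrix once, before sampling, so at runtime the merged model has the same tensor shapes and memory footprint as the base model.
import torch

def merge_lora_into_linear(weight, lora_down, lora_up, alpha, rank):
    # W' = W + (alpha / rank) * up @ down -- same shape as W, so no extra
    # parameters remain to keep in memory during sampling.
    return weight + (alpha / rank) * (lora_up @ lora_down)

# Toy example: a 16x32 layer with a rank-4 LoRA.
w = torch.randn(16, 32)
down = torch.randn(4, 32)   # lora_down: (rank, in_features)
up = torch.randn(16, 4)     # lora_up:   (out_features, rank)
print(merge_lora_into_linear(w, down, up, alpha=4.0, rank=4).shape)  # torch.Size([16, 32])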
Yes, currently only the official scheduler implementation from Black Forest Labs is available, which is very similar to Euler but with slightly different values.
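To illustrate what "similar to Euler" means here, a simplified flow-matching Euler loop looks like the sketch below (illustrative only, not the BFL code; the real FLUX schedule also shifts the sigmas based on resolution).
import torch

def shift_sigmas(sigmas, shift=3.0):
    # Simple timestep shift that pushes more steps toward high noise;
    # the value 3.0 is just an example, not FLUX's exact schedule.
    return shift * sigmas / (1.0 + (shift - 1.0) * sigmas)

def euler_flow_sampler(model, x, sigmas):
    # x starts as pure noise; model(x, sigma) predicts a velocity field,
    # and each step is a plain Euler update along that field.
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        v = model(x, sigma)
        x = x + (sigma_next - sigma) * v
    return x

# Usage sketch with a dummy stand-in for the FLUX transformer.
sigmas = shift_sigmas(torch.linspace(1.0, 0.0, 31))
dummy_model = lambda x, s: -x
out = euler_flow_sampler(dummy_model, torch.randn(1, 16, 64, 64), sigmas)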
I'm not sure how the GUI installs sd-scripts, but you might be able to reinstall sd-scripts as pip install -e ./sd-scripts. However, this may destroy your environment, so it's better to reinstall if possible.
@diodiogod I think I see your issue! Looking at your error message again, it looks like flux_minimal_inference.py is trying to import from a "networks" directory inside the ..\venv\Lib\site-packages\ directory of the virtual environment instead of the "networks" directory inside of sd-scripts. I am only using the GUI version, and the virtual environment was set up in the Kohya-ss directory, a directory above sd-scripts. (../Kohya-ss/sd-scripts)
Traceback (most recent call last): File "J:\train\sd-scripts\flux_minimal_inference.py", line 21, in <module> from networks import oft_flux ImportError: cannot import name 'oft_flux' from 'networks' (J:\train\sd-scripts\venv\Lib\site-packages\networks\__init__.py)
I don't know enough about Python to know if there is syntax to make sure you are importing from a certain directory. You may want to set up the virtual environment in the same directory that the sd-scripts directory is in; that may fix it. (I say "may" because of the info below.) You would just have to activate the virtual environment there and cd into sd-scripts.
Setting up the venv in a directory above sd-scripts still might cause a conflict if the import looks in the venv first instead of the directory where the Python script is. What did you use to set up the virtual environment? On my system (Windows 11) I always use python to create and activate the venv. That directory in ..\venv\Lib\site-packages\ in all of my environments is called "networkx", so that may be why myself and others are not getting the error.
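One quick, hypothetical way to see which "networks" Python is actually resolving (run it from inside the sd-scripts folder, in the activated venv):
import importlib.util

# Prints where each candidate module comes from; the repo's own networks
# folder should win over anything in venv\Lib\site-packages.
for name in ("networks", "networks.oft_flux"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "not found")
If "networks" resolves to ...\venv\Lib\site-packages\networks\__init__.py, then some pip-installed package that happens to be named "networks" is shadowing the repo folder; uninstalling it from that venv (pip uninstall networks) would be one thing to try.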
I'm very interested in command line generations like what can be found here: https://note.com/kohya_ss/n/n2693183a798e
Unfortunately this isn't up to date with Flux generations, or maybe I'm not understanding the coding enough. If it can't be done with python gen_img_diffusers.py, how would I go about doing it?
I don't know enough about python to try to figure out how the scripts do it when generating sample images while it is training.
The reason I would want to do this is that I do not want to have to load another GUI like A1111 or ForgeUI to test trained LORAs. I'm not too familiar with ComfyUI either. When the scripts generate the sample images, they do so in a clean way without running up the memory. Plus my generations don't look the same even when using the same parameters for generation.