Closed: rabidcopy closed this issue 1 year ago
Tell me your environment, such as the version of diffusers, the SD version, etc.
SD.Next version bb912c35 2023-10-16 dev
absl-py==2.0.0
accelerate==0.20.3
addict==2.4.0
aenum==3.1.15
aiofiles==23.2.1
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
appdirs==1.4.4
astunparse==1.6.3
async-timeout==4.0.3
attrs==23.1.0
basicsr==1.4.2
beautifulsoup4==4.12.2
blendmodes==2023
cachetools==5.3.1
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.0
clean-fid==0.1.35
click==8.1.7
clip @ git+https://github.com/openai/CLIP.git@a1d071733d7111c9c014f024669f959182114e33
clip-interrogator==0.6.0
colorama==0.4.6
coloredlogs==15.0.1
colorlog==6.7.0
compel==2.0.2
contourpy==1.1.1
convcolors==2.2.0
cssselect2==0.7.0
cycler==0.12.1
dctorch==0.1.2
deprecation==2.1.0
diffusers==0.21.4
easydev==0.12.1
einops==0.4.1
exceptiongroup==1.1.3
extcolors==1.0.0
facexlib==0.3.0
fastapi==0.103.2
fasteners==0.19
ffmpy==0.3.1
filelock==3.12.4
filetype==1.2.0
filterpy==1.4.5
flatbuffers==23.5.26
fonttools==4.43.1
frozenlist==1.4.0
fsspec==2023.9.2
ftfy==6.1.1
future==0.18.3
fvcore==0.1.5.post20221221
gast==0.4.0
gdown==4.7.1
gfpgan==1.3.8
gitdb==4.0.10
GitPython==3.1.37
google-auth==2.23.3
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
gradio==3.43.2
gradio_client==0.5.0
greenlet==3.0.0
grpcio==1.59.0
h11==0.14.0
h5py==3.10.0
httpcore==0.18.0
httpx==0.25.0
huggingface-hub==0.17.1
humanfriendly==10.0
idna==3.4
imageio==2.31.5
importlib-metadata==6.8.0
importlib-resources==6.1.0
inflection==0.5.1
invisible-watermark @ git+https://github.com/patrickvonplaten/invisible-watermark.git@8f53a758995c310989956be8c4d4caab617643da
iopath==0.1.9
Jinja2==3.1.2
joblib==1.3.2
jsonmerge==1.9.2
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
keras==2.13.1
kiwisolver==1.4.5
kornia==0.7.0
lark==1.1.7
lazy_loader==0.3
libclang==16.0.6
lightning-utilities==0.9.0
llvmlite==0.40.1
lmdb==1.4.1
lpips==0.1.4
lxml==4.9.3
Markdown==3.5
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.8.0
mdurl==0.1.2
mediapipe==0.10.7
mpmath==1.3.0
multidict==6.0.4
networkx==3.1
numba==0.57.1
numexpr==2.8.4
numpy==1.24.4
oauthlib==3.2.2
omegaconf==2.3.0
onnxruntime==1.15.1
open-clip-torch==2.22.0
opencv-contrib-python==4.8.1.78
opencv-contrib-python-headless==4.8.1.78
opencv-python==4.8.1.78
opencv-python-headless==4.7.0.72
opt-einsum==3.3.0
orjson==3.9.9
packaging==23.2
pandas==1.5.3
pexpect==4.8.0
pi-heif==0.13.0
piexif==1.1.3
Pillow==9.5.0
platformdirs==3.11.0
pooch==1.7.0
portalocker==2.8.2
protobuf==3.20.3
psutil==5.9.5
ptyprocess==0.7.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser==2.21
pydantic==1.10.11
pydantic_core==2.10.1
pydub==0.25.1
Pygments==2.16.1
PyMatting==1.1.10
pyparsing==3.1.1
pyreadline3==3.4.1
PySocks==1.7.1
python-dateutil==2.8.2
python-multipart==0.0.6
pytorch-lightning==1.9.4
pytz==2023.3.post1
PyWavelets==1.4.1
pywin32==306
PyYAML==6.0.1
referencing==0.30.2
regex==2023.10.3
rembg==2.0.50
reportlab==4.0.6
requests==2.31.0
requests-oauthlib==1.3.1
resize-right==0.0.2
rich==13.6.0
rpds-py==0.10.6
rsa==4.9
safetensors==0.4.0
scikit-image==0.22.0
scikit-learn==1.3.1
scipy==1.11.3
semantic-version==2.10.0
Send2Trash==1.8.2
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
sounddevice==0.4.6
soupsieve==2.5
SQLAlchemy==2.0.22
starlette==0.27.0
svglib==1.5.1
sympy==1.12
tabulate==0.9.0
tb-nightly==2.15.0a20231014
tensorboard==2.13.0
tensorboard-data-server==0.7.1
tensorflow==2.13.0
tensorflow-estimator==2.13.0
tensorflow-intel==2.13.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.3.0
threadpoolctl==3.2.0
tifffile==2023.9.26
timm==0.9.7
tinycss2==1.2.1
tokenizers==0.13.3
tomesd==0.1.3
toml==0.10.2
tomli==2.0.1
toolz==0.12.0
torch==2.0.0
torch-directml==0.2.0.dev230426
torchdiffeq==0.2.3
torchmetrics==1.2.0
torchsde==0.2.6
torchvision==0.15.1
tqdm==4.66.1
trampoline==0.1.2
transformers==4.30.2
typing_extensions==4.7.1
tzdata==2023.3
urllib3==1.26.15
uvicorn==0.23.2
voluptuous==0.13.1
wcwidth==0.2.8
webencodings==0.5.1
websockets==11.0.3
Werkzeug==3.0.0
wrapt==1.15.0
yacs==0.1.8
yapf==0.40.2
yarl==1.9.2
zipp==3.17.0
For clarity, the same environment/venv is used with kohya-ss's original extraction script; here is its output with similar settings.
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
loading text encoder: <All keys matched successfully>
create LoRA network. base dim (rank): 4, alpha: 4
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder:
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
create LoRA network. base dim (rank): 4, alpha: 4
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder:
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
Text encoder is different. 0.019997619092464447 > 1e-06
calculating by svd
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 264/264 [01:24<00:00, 3.13it/s]
create LoRA network from weights
create LoRA for Text Encoder:
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
Loading extracted LoRA weights: <All keys matched successfully>
LoRA weights are saved to: test2.safetensors
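The log above shows the core of the extraction: the script first checks whether a component actually differs from the base model (the "Text encoder is different. 0.0199... > 1e-06" line), then runs an SVD over each layer's weight delta ("calculating by svd") to produce rank-4 down/up matrices. A minimal sketch of those two steps, not kohya's actual code (hypothetical function names, numpy in place of torch):

```python
import numpy as np

def component_differs(w_tuned, w_base, threshold=1e-6):
    """Mirrors the 'is different ... > 1e-06' gate seen in the log:
    extraction only proceeds if the weights actually diverge."""
    return float(np.abs(w_tuned - w_base).max()) > threshold

def extract_lora_pair(w_tuned, w_base, rank=4):
    """Factor the weight delta into rank-r matrices via truncated SVD.
    Returns (down, up) such that up @ down approximates w_tuned - w_base."""
    delta = w_tuned - w_base                           # shape (out, in)
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]                        # (out, rank), singular values folded in
    down = vt[:rank, :]                                # (rank, in)
    return down, up
```

The reconstruction `up @ down` is exact when the true delta has rank <= r, and a best rank-r approximation otherwise; the real script also clamps outlier singular values and converts the factors to half precision before saving.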
Edit: It seems `modules_dim` is always None, and there is nowhere in the extension to configure the `--dim` equivalent. I'm not entirely sure whether this is just cosmetic in what it reports or whether the U-Net genuinely isn't being extracted. So my apologies, I'm not sure whether this is an issue or not; it may just be that the modules aren't being reported correctly.
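For what it's worth, in kohya-style network creation `modules_dim` is a per-module rank mapping that is only populated when a network is re-created from existing weights; when building a fresh network for extraction it is expected to be None, and the global `network_dim` (the `--dim` equivalent) applies to every module. A rough sketch of that fallback, with a hypothetical helper name:

```python
# Hypothetical simplification of kohya-style per-module dim resolution;
# not the actual library code.
def resolve_dim(module_name, network_dim, modules_dim=None):
    """modules_dim is only filled in when re-creating a network from saved
    weights; for a fresh extraction it is None and network_dim (--dim) wins."""
    if modules_dim is not None:
        # Re-creating from weights: a module absent from the mapping
        # had no LoRA tensors saved, so it is skipped (None).
        return modules_dim.get(module_name)
    return network_dim
```

So `modules_dim=None` by itself is normal; the "0 modules" for the U-Net would point at the module-matching step rather than the dim configuration.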
I've now changed the make-lora script to use Kohya's latest script; please try it.
When I use the original kohya-ss LoRA extraction script, it creates a file that reports U-Net modules exist and were extracted into the LoRA. When using this extension, it produces a file with no U-Net modules that is essentially only the text-encoder part. This happens with every combination of models I've tried, on the latest commit as of today.
loading u-net:
loading vae:
loading text encoder:
create LoRA network. base dim (rank): 128, alpha: 128
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 0 modules.
create LoRA network. base dim (rank): 128, alpha: 128
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 0 modules.
calculating by svd
100%|██████████████████████████████████████████████████████████████████████████████████| 72/72 [00:23<00:00, 3.03it/s]
enable LoRA for text encoder
enable LoRA for U-Net
LoRA has 216 weights.
Loading extracted LoRA weights:
LoRA weights are saved to: models\Lora\test.safetensors
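One way to confirm the symptom independently of the log is to count the key prefixes in the saved file: a complete extraction should contain both `lora_unet_*` and `lora_te*` tensors. A small helper for that (the prefixes follow the common kohya naming convention; reading the actual keys from a `.safetensors` file would use e.g. `safe_open` from the safetensors library):

```python
from collections import Counter

def count_lora_prefixes(keys):
    """Tally LoRA tensor keys by component prefix (kohya naming convention)."""
    counts = Counter()
    for key in keys:
        if key.startswith("lora_unet_"):
            counts["unet"] += 1
        elif key.startswith("lora_te"):   # covers lora_te_, lora_te1_, lora_te2_
            counts["te"] += 1
        else:
            counts["other"] += 1
    return dict(counts)

# To get the keys from the saved file:
#   from safetensors import safe_open
#   with safe_open("models/Lora/test.safetensors", framework="pt") as f:
#       keys = list(f.keys())
```

If `unet` comes back 0 for the extension's output but not for kohya's, the U-Net weights really are missing from the file rather than just being misreported.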