According to the log, there is an issue with some of the installation packages:
OSError: [WinError 1455] ページング ファイルが小さすぎるため、この操作を完了できません。 (The paging file is too small for this operation to be completed.) Error loading "E:\dreambooth0k\sd-scripts\venv\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies.
It is probably best to follow these instructions:
Open a new PowerShell window and run:
pip freeze > uninstall.txt
pip uninstall -r uninstall.txt
Then redo the kohya_ss installation.
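Putting that together with the reinstall steps that appear in the logs below, the full clean-reinstall sequence looks roughly like this (a sketch only; it assumes the activated venv inside the sd-scripts checkout and the same torch/cu116 versions and requirements.txt shown below, so adjust paths and versions to your setup):
# run inside the activated venv of the sd-scripts checkout
pip freeze > uninstall.txt
pip uninstall -y -r uninstall.txt   # -y skips the per-package confirmation prompts
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --use-pep517 --upgrade -r requirements.txt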
Thank you for replying. I did exactly what you said, but the result was the same: LoRA still could not be run and failed with the same error. I have included the console logs from uninstalling, reinstalling, and running again below for your reference. I put them in a pull-down because they are too long.
(venv) PS E:\dreambooth0k\sd-scripts> pip list
Package Version
---------- -------
pip 22.2.2
setuptools 63.2.0
wheel 0.38.4
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS E:\dreambooth0k\sd-scripts> pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
Collecting torch==1.12.1+cu116
Using cached https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl (2388.4 MB)
Collecting torchvision==0.13.1+cu116
Using cached https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting typing-extensions
Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting requests
Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting numpy
Using cached numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB)
Collecting pillow!=8.3.*,>=5.3.0
Using cached Pillow-9.4.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl (96 kB)
Installing collected packages: charset-normalizer, urllib3, typing-extensions, pillow, numpy, idna, certifi, torch, requests, torchvision
Successfully installed certifi-2022.12.7 charset-normalizer-3.0.1 idna-3.4 numpy-1.24.1 pillow-9.4.0 requests-2.28.2 torch-1.12.1+cu116 torchvision-0.13.1+cu116 typing-extensions-4.4.0 urllib3-1.26.14
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS E:\dreambooth0k\sd-scripts> pip install --use-pep517 --upgrade -r requirements.txt
Processing e:\dreambooth0k\sd-scripts
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting accelerate==0.15.0
Using cached accelerate-0.15.0-py3-none-any.whl (191 kB)
Collecting transformers==4.25.1
Using cached transformers-4.25.1-py3-none-any.whl (5.8 MB)
Collecting ftfy
Using cached ftfy-6.1.1-py3-none-any.whl (53 kB)
Collecting albumentations
Using cached albumentations-1.3.0-py3-none-any.whl (123 kB)
Collecting opencv-python
Using cached opencv_python-4.7.0.68-cp37-abi3-win_amd64.whl (38.2 MB)
Collecting einops
Using cached einops-0.6.0-py3-none-any.whl (41 kB)
Collecting diffusers[torch]==0.10.2
Using cached diffusers-0.10.2-py3-none-any.whl (503 kB)
Collecting pytorch_lightning
Using cached pytorch_lightning-1.9.0-py3-none-any.whl (825 kB)
Collecting bitsandbytes==0.35.0
Using cached bitsandbytes-0.35.0-py3-none-any.whl (62.5 MB)
Collecting tensorboard
Using cached tensorboard-2.11.2-py3-none-any.whl (6.0 MB)
Collecting safetensors==0.2.6
Using cached safetensors-0.2.6-cp310-cp310-win_amd64.whl (268 kB)
Collecting gradio
Using cached gradio-3.16.2-py3-none-any.whl (14.2 MB)
Collecting altair
Using cached altair-4.2.0-py3-none-any.whl (812 kB)
Collecting easygui
Using cached easygui-0.98.3-py2.py3-none-any.whl (92 kB)
Requirement already satisfied: requests in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from -r requirements.txt (line 16)) (2.28.2)
Collecting timm==0.4.12
Using cached timm-0.4.12-py3-none-any.whl (376 kB)
Collecting fairscale==0.4.4
Using cached fairscale-0.4.4-py3-none-any.whl
Collecting tensorflow<2.11
Using cached tensorflow-2.10.1-cp310-cp310-win_amd64.whl (455.9 MB)
Collecting huggingface-hub
Using cached huggingface_hub-0.11.1-py3-none-any.whl (182 kB)
Collecting packaging>=20.0
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting pyyaml
Using cached PyYAML-6.0-cp310-cp310-win_amd64.whl (151 kB)
Requirement already satisfied: torch>=1.4.0 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.12.1+cu116)
Requirement already satisfied: numpy>=1.17 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.24.1)
Collecting psutil
Using cached psutil-5.9.4-cp36-abi3-win_amd64.whl (252 kB)
Collecting filelock
Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting regex!=2019.12.17
Using cached regex-2022.10.31-cp310-cp310-win_amd64.whl (267 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
Using cached tokenizers-0.13.2-cp310-cp310-win_amd64.whl (3.3 MB)
Collecting tqdm>=4.27
Using cached tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
Requirement already satisfied: Pillow in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from diffusers[torch]==0.10.2->-r requirements.txt (line 7)) (9.4.0)
Collecting importlib-metadata
Using cached importlib_metadata-6.0.0-py3-none-any.whl (21 kB)
Requirement already satisfied: torchvision in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from timm==0.4.12->-r requirements.txt (line 17)) (0.13.1+cu116)
Collecting wcwidth>=0.2.5
Using cached wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)
Collecting qudida>=0.0.4
Using cached qudida-0.0.4-py3-none-any.whl (3.5 kB)
Collecting scikit-image>=0.16.1
Using cached scikit_image-0.19.3-cp310-cp310-win_amd64.whl (12.0 MB)
Collecting scipy
Using cached scipy-1.10.0-cp310-cp310-win_amd64.whl (42.5 MB)
Collecting opencv-python-headless>=4.1.1
Using cached opencv_python_headless-4.7.0.68-cp37-abi3-win_amd64.whl (38.1 MB)
Requirement already satisfied: typing-extensions>=4.0.0 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (4.4.0)
Collecting fsspec[http]>2021.06.0
Downloading fsspec-2023.1.0-py3-none-any.whl (143 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.0/143.0 kB 8.9 MB/s eta 0:00:00
Collecting lightning-utilities>=0.4.2
Using cached lightning_utilities-0.5.0-py3-none-any.whl (18 kB)
Collecting torchmetrics>=0.7.0
Using cached torchmetrics-0.11.0-py3-none-any.whl (512 kB)
Collecting markdown>=2.6.8
Using cached Markdown-3.4.1-py3-none-any.whl (93 kB)
Collecting werkzeug>=1.0.1
Using cached Werkzeug-2.2.2-py3-none-any.whl (232 kB)
Collecting absl-py>=0.4
Using cached absl_py-1.4.0-py3-none-any.whl (126 kB)
Requirement already satisfied: wheel>=0.26 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (0.38.4)
Collecting google-auth<3,>=1.6.3
Using cached google_auth-2.16.0-py2.py3-none-any.whl (177 kB)
Collecting grpcio>=1.24.3
Using cached grpcio-1.51.1-cp310-cp310-win_amd64.whl (3.7 MB)
Collecting protobuf<4,>=3.9.2
Using cached protobuf-3.20.3-cp310-cp310-win_amd64.whl (904 kB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Requirement already satisfied: setuptools>=41.0.0 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (63.2.0)
Collecting tensorboard-plugin-wit>=1.6.0
Using cached tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
Using cached tensorboard_data_server-0.6.1-py3-none-any.whl (2.4 kB)
Collecting matplotlib
Using cached matplotlib-3.6.3-cp310-cp310-win_amd64.whl (7.2 MB)
Collecting ffmpy
Using cached ffmpy-0.3.0-py3-none-any.whl
Collecting python-multipart
Using cached python-multipart-0.0.5.tar.gz (32 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting orjson
Using cached orjson-3.8.5-cp310-none-win_amd64.whl (202 kB)
Collecting websockets>=10.0
Using cached websockets-10.4-cp310-cp310-win_amd64.whl (101 kB)
Collecting aiohttp
Using cached aiohttp-3.8.3-cp310-cp310-win_amd64.whl (319 kB)
Collecting uvicorn
Using cached uvicorn-0.20.0-py3-none-any.whl (56 kB)
Collecting jinja2
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting httpx
Using cached httpx-0.23.3-py3-none-any.whl (71 kB)
Collecting markdown-it-py[linkify,plugins]
Using cached markdown_it_py-2.1.0-py3-none-any.whl (84 kB)
Collecting markupsafe
Using cached MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting pycryptodome
Using cached pycryptodome-3.16.0-cp35-abi3-win_amd64.whl (1.7 MB)
Collecting fastapi
Using cached fastapi-0.89.1-py3-none-any.whl (55 kB)
Collecting aiofiles
Using cached aiofiles-22.1.0-py3-none-any.whl (14 kB)
Collecting pydub
Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting pandas
Downloading pandas-1.5.3-cp310-cp310-win_amd64.whl (10.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.4/10.4 MB 43.5 MB/s eta 0:00:00
Collecting pydantic
Using cached pydantic-1.10.4-cp310-cp310-win_amd64.whl (2.1 MB)
Collecting entrypoints
Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting toolz
Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
Collecting jsonschema>=3.0
Using cached jsonschema-4.17.3-py3-none-any.whl (90 kB)
Requirement already satisfied: idna<4,>=2.5 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (2022.12.7)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (3.0.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in e:\dreambooth0k\sd-scripts\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (1.26.14)
Collecting keras<2.11,>=2.10.0
Using cached keras-2.10.0-py2.py3-none-any.whl (1.7 MB)
Collecting google-pasta>=0.1.1
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting astunparse>=1.6.0
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting h5py>=2.9.0
Using cached h5py-3.7.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting six>=1.12.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting flatbuffers>=2.0
Using cached flatbuffers-23.1.4-py2.py3-none-any.whl (26 kB)
Collecting opt-einsum>=2.3.2
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting termcolor>=1.1.0
Using cached termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting tensorflow-estimator<2.11,>=2.10.0
Using cached tensorflow_estimator-2.10.0-py2.py3-none-any.whl (438 kB)
Collecting libclang>=13.0.0
Using cached libclang-15.0.6.1-py2.py3-none-win_amd64.whl (23.2 MB)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
Using cached tensorflow_io_gcs_filesystem-0.29.0-cp310-cp310-win_amd64.whl (1.5 MB)
Collecting keras-preprocessing>=1.1.1
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting tensorboard
Using cached tensorboard-2.10.1-py3-none-any.whl (5.9 MB)
Collecting wrapt>=1.11.0
Using cached wrapt-1.14.1-cp310-cp310-win_amd64.whl (35 kB)
Collecting gast<=0.4.0,>=0.2.1
Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting protobuf<4,>=3.9.2
Using cached protobuf-3.19.6-cp310-cp310-win_amd64.whl (895 kB)
Collecting multidict<7.0,>=4.5
Using cached multidict-6.0.4-cp310-cp310-win_amd64.whl (28 kB)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting frozenlist>=1.1.1
Using cached frozenlist-1.3.3-cp310-cp310-win_amd64.whl (33 kB)
Collecting async-timeout<5.0,>=4.0.0a3
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting attrs>=17.3.0
Using cached attrs-22.2.0-py3-none-any.whl (60 kB)
Collecting yarl<2.0,>=1.0
Using cached yarl-1.8.2-cp310-cp310-win_amd64.whl (56 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting cachetools<6.0,>=2.0.0
Using cached cachetools-5.2.1-py3-none-any.whl (9.3 kB)
Collecting requests-oauthlib>=0.7.0
Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
Using cached pyrsistent-0.19.3-cp310-cp310-win_amd64.whl (62 kB)
Collecting pytz>=2020.1
Using cached pytz-2022.7.1-py2.py3-none-any.whl (499 kB)
Collecting python-dateutil>=2.8.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting scikit-learn>=0.19.1
Using cached scikit_learn-1.2.0-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting networkx>=2.2
Using cached networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting tifffile>=2019.7.26
Using cached tifffile-2022.10.10-py3-none-any.whl (210 kB)
Collecting imageio>=2.4.1
Using cached imageio-2.24.0-py3-none-any.whl (3.4 MB)
Collecting PyWavelets>=1.1.1
Using cached PyWavelets-1.4.1-cp310-cp310-win_amd64.whl (4.2 MB)
Collecting colorama
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting starlette==0.22.0
Using cached starlette-0.22.0-py3-none-any.whl (64 kB)
Collecting anyio<5,>=3.4.0
Using cached anyio-3.6.2-py3-none-any.whl (80 kB)
Collecting rfc3986[idna2008]<2,>=1.3
Using cached rfc3986-1.5.0-py2.py3-none-any.whl (31 kB)
Collecting httpcore<0.17.0,>=0.15.0
Using cached httpcore-0.16.3-py3-none-any.whl (69 kB)
Collecting sniffio
Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting zipp>=0.5
Using cached zipp-3.11.0-py3-none-any.whl (6.6 kB)
Collecting mdurl~=0.1
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting mdit-py-plugins
Using cached mdit_py_plugins-0.3.3-py3-none-any.whl (50 kB)
Collecting linkify-it-py~=1.0
Using cached linkify_it_py-1.0.3-py3-none-any.whl (19 kB)
Collecting pyparsing>=2.2.1
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting kiwisolver>=1.0.1
Using cached kiwisolver-1.4.4-cp310-cp310-win_amd64.whl (55 kB)
Collecting contourpy>=1.0.1
Using cached contourpy-1.0.7-cp310-cp310-win_amd64.whl (162 kB)
Collecting cycler>=0.10
Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting fonttools>=4.22.0
Using cached fonttools-4.38.0-py3-none-any.whl (965 kB)
Collecting h11>=0.8
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting click>=7.0
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting uc-micro-py
Using cached uc_micro_py-1.0.1-py3-none-any.whl (6.2 kB)
Collecting pyasn1<0.5.0,>=0.4.6
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting oauthlib>=3.0.0
Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Collecting threadpoolctl>=2.0.0
Using cached threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting joblib>=1.1.1
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Building wheels for collected packages: library, python-multipart
Building wheel for library (pyproject.toml) ... done
Created wheel for library: filename=library-0.0.0-py3-none-any.whl size=32703 sha256=395719df8f92a6697bc533686bf7d4322abde1bbb03bce83a865aad2170ef86b
Stored in directory: c:\users\%username%\appdata\local\pip\cache\wheels\ae\09\60\c29c9d92535cf560229b50fb48c90465fa0e7012efb335f830
Building wheel for python-multipart (pyproject.toml) ... done
Created wheel for python-multipart: filename=python_multipart-0.0.5-py3-none-any.whl size=31681 sha256=99ea873d00ce1f67506058f0a88fff74838aab4566fb0f2194f2f471c11cc69d
Stored in directory: c:\users\%username%\appdata\local\pip\cache\wheels\ae\3f\03\fa4bd98cd7f4a25e63b6a0b61a7a8352e0f874cd9de1f3390d
Successfully built library python-multipart
Installing collected packages: wcwidth, tokenizers, tensorboard-plugin-wit, safetensors, rfc3986, pytz, pydub, pyasn1, library, libclang, keras, flatbuffers, ffmpy, easygui, bitsandbytes, zipp, wrapt, websockets, uc-micro-py, toolz, tifffile, threadpoolctl, termcolor, tensorflow-io-gcs-filesystem, tensorflow-estimator, tensorboard-data-server, sniffio, six, scipy, rsa, regex, pyyaml, PyWavelets, pyrsistent, pyparsing, pydantic, pycryptodome, pyasn1-modules, psutil, protobuf, packaging, orjson, opt-einsum, opencv-python-headless, opencv-python, oauthlib, networkx, multidict, mdurl, markupsafe, markdown, kiwisolver, joblib, imageio, h5py, h11, grpcio, gast, ftfy, fsspec, frozenlist, fonttools, filelock, entrypoints, einops, cycler, contourpy, colorama, charset-normalizer, cachetools, attrs, async-timeout, aiofiles, absl-py, yarl, werkzeug, tqdm, torchmetrics, scikit-learn, scikit-image, python-multipart, python-dateutil, markdown-it-py, linkify-it-py, lightning-utilities, keras-preprocessing, jsonschema, jinja2, importlib-metadata, google-pasta, google-auth, fairscale, click, astunparse, anyio, aiosignal, accelerate, uvicorn, starlette, requests-oauthlib, qudida, pandas, mdit-py-plugins, matplotlib, huggingface-hub, httpcore, aiohttp, transformers, timm, httpx, google-auth-oauthlib, fastapi, diffusers, altair, albumentations, tensorboard, pytorch_lightning, gradio, tensorflow
Attempting uninstall: charset-normalizer
Found existing installation: charset-normalizer 3.0.1
Uninstalling charset-normalizer-3.0.1:
Successfully uninstalled charset-normalizer-3.0.1
Successfully installed PyWavelets-1.4.1 absl-py-1.4.0 accelerate-0.15.0 aiofiles-22.1.0 aiohttp-3.8.3 aiosignal-1.3.1 albumentations-1.3.0 altair-4.2.0 anyio-3.6.2 astunparse-1.6.3 async-timeout-4.0.2 attrs-22.2.0 bitsandbytes-0.35.0 cachetools-5.2.1 charset-normalizer-2.1.1 click-8.1.3 colorama-0.4.6 contourpy-1.0.7 cycler-0.11.0 diffusers-0.10.2 easygui-0.98.3 einops-0.6.0 entrypoints-0.4 fairscale-0.4.4 fastapi-0.89.1 ffmpy-0.3.0 filelock-3.9.0 flatbuffers-23.1.4 fonttools-4.38.0 frozenlist-1.3.3 fsspec-2023.1.0 ftfy-6.1.1 gast-0.4.0 google-auth-2.16.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 gradio-3.16.2 grpcio-1.51.1 h11-0.14.0 h5py-3.7.0 httpcore-0.16.3 httpx-0.23.3 huggingface-hub-0.11.1 imageio-2.24.0 importlib-metadata-6.0.0 jinja2-3.1.2 joblib-1.2.0 jsonschema-4.17.3 keras-2.10.0 keras-preprocessing-1.1.2 kiwisolver-1.4.4 libclang-15.0.6.1 library-0.0.0 lightning-utilities-0.5.0 linkify-it-py-1.0.3 markdown-3.4.1 markdown-it-py-2.1.0 markupsafe-2.1.2 matplotlib-3.6.3 mdit-py-plugins-0.3.3 mdurl-0.1.2 multidict-6.0.4 networkx-3.0 oauthlib-3.2.2 opencv-python-4.7.0.68 opencv-python-headless-4.7.0.68 opt-einsum-3.3.0 orjson-3.8.5 packaging-23.0 pandas-1.5.3 protobuf-3.19.6 psutil-5.9.4 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycryptodome-3.16.0 pydantic-1.10.4 pydub-0.25.1 pyparsing-3.0.9 pyrsistent-0.19.3 python-dateutil-2.8.2 python-multipart-0.0.5 pytorch_lightning-1.9.0 pytz-2022.7.1 pyyaml-6.0 qudida-0.0.4 regex-2022.10.31 requests-oauthlib-1.3.1 rfc3986-1.5.0 rsa-4.9 safetensors-0.2.6 scikit-image-0.19.3 scikit-learn-1.2.0 scipy-1.10.0 six-1.16.0 sniffio-1.3.0 starlette-0.22.0 tensorboard-2.10.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.10.1 tensorflow-estimator-2.10.0 tensorflow-io-gcs-filesystem-0.29.0 termcolor-2.2.0 threadpoolctl-3.1.0 tifffile-2022.10.10 timm-0.4.12 tokenizers-0.13.2 toolz-0.12.0 torchmetrics-0.11.0 tqdm-4.64.1 transformers-4.25.1 uc-micro-py-1.0.1 uvicorn-0.20.0 wcwidth-0.2.6 websockets-10.4 werkzeug-2.2.2 wrapt-1.14.1 yarl-1.8.2 zipp-3.11.0
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS E:\dreambooth0k\sd-scripts> pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
Collecting xformers==0.0.14.dev0
Downloading https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl (184.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 184.3/184.3 MB 5.4 MB/s eta 0:00:00
Installing collected packages: xformers
Successfully installed xformers-0.0.14.dev0
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS E:\dreambooth0k\sd-scripts> cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
(venv) PS E:\dreambooth0k\sd-scripts> cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
(venv) PS E:\dreambooth0k\sd-scripts> cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
(venv) PS E:\dreambooth0k\sd-scripts> accelerate config
------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
No distributed training
Do you want to run your training on CPU only (even if a GPU is available)? [yes/NO]:NO
Do you wish to optimize your script with torch dynamo?[yes/NO]:NO
Do you want to use DeepSpeed? [yes/NO]: NO
What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:all
------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
fp16
accelerate configuration saved at C:\Users\username/.cache\huggingface\accelerate\default_config.yaml
(venv) PS E:\dreambooth0k\sd-scripts> accelerate launch --num_cpu_threads_per_process=6 "train_network.py" --enable_bucket --pretrained_model_name_or_path="E:\train\dream\koyadream1\motomodel\model.safetensors" --train_data_dir="E:\train\dream\2\img" --reg_data_dir="E:\train\dream\2\reg" --resolution=512,512 --output_dir="E:\train\dream\2\model" --use_8bit_adam --xformers --logging_dir="E:\train\dream\2\log" --save_model_as=safetensors --network_module=networks.lora --text_encoder_lr=5e-5 --unet_lr=1e-3 --network_dim=4 --output_name="last" --learning_rate="1e-5" --lr_scheduler="cosine" --lr_warmup_steps="68" --train_batch_size="1" --max_train_steps="680" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --seed="1234" --cache_latents --xformers --use_8bit_adam
prepare tokenizer
Use DreamBooth method.
prepare train images.
found directory 40_asd 1girl contains 17 image files
680 train images with repeating.
prepare reg images.
found directory 1_1girl contains 1000 image files
1000 reg images.
some of reg images are not used / 正則化画像の数が多いので、一部使用されない正則化画像があります
loading image sizes.
100%|███████████████████████████████████████████████████████████████████████████████| 697/697 [00:01<00:00, 353.34it/s]
make buckets
number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む)
bucket 0: resolution (256, 832), count: 0
bucket 1: resolution (256, 896), count: 0
bucket 2: resolution (256, 960), count: 0
bucket 3: resolution (256, 1024), count: 0
bucket 4: resolution (320, 704), count: 0
bucket 5: resolution (320, 768), count: 0
bucket 6: resolution (384, 640), count: 0
bucket 7: resolution (448, 576), count: 0
bucket 8: resolution (512, 512), count: 1360
bucket 9: resolution (576, 448), count: 0
bucket 10: resolution (640, 384), count: 0
bucket 11: resolution (704, 320), count: 0
bucket 12: resolution (768, 320), count: 0
bucket 13: resolution (832, 256), count: 0
bucket 14: resolution (896, 256), count: 0
bucket 15: resolution (960, 256), count: 0
bucket 16: resolution (1024, 256), count: 0
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net:
loading vae:
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 
'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.20.mlp.fc1.weight', 
'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.layer_norm1.bias', 'text_projection.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 
'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 
'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.15.layer_norm1.weight', 
'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'logit_scale', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'visual_projection.weight', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.23.self_attn.q_proj.bias']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
loading text encoder:
Replace CrossAttention.forward to use xformers
caching latents.
100%|████████████████████████████████████████████████████████████████████████████████| 697/697 [01:01<00:00, 11.37it/s]
import network module: networks.lora
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: Loading binary E:\dreambooth0k\sd-scripts\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll...
use 8-bit Adam optimizer
running training / 学習開始
num train images * repeats / 学習画像の数×繰り返し回数: 680
num reg images / 正則化画像の数: 1000
num batches per epoch / 1epochのバッチ数: 1360
num epochs / epoch数: 1
batch size per device / バッチサイズ: 1
total train batch size (with parallel & distributed & accumulation) / 総バッチサイズ(並列学習、勾配合計含む): 1
gradient accumulation steps / 勾配を合計するステップ数 = 1
total optimization steps / 学習ステップ数: 680
steps: 0%| | 0/680 [00:00, ?it/s]epoch 1/1
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\tar\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\dreambooth0k\sd-scripts\train_network.py", line 8, in
import torch
File "E:\dreambooth0k\sd-scripts\venv\lib\site-packages\torch\__init__.py", line 129, in
raise err
OSError: [WinError 1455] ページング ファイルが小さすぎるため、この操作を完了できません。 (The paging file is too small for this operation to be completed.) Error loading "E:\dreambooth0k\sd-scripts\venv\lib\site-packages\torch\lib\cufft64_10.dll" or one of its dependencies.
Have you tried increasing your Windows paging file? How much RAM do you have? With less than 32 GB of system memory you will need a large paging file, because training takes a lot of system RAM as well as GPU VRAM.
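If it helps, a quick way to check the current page-file situation before resizing it (via System Properties > Advanced > Performance Settings > Advanced > Virtual memory) is something like this in PowerShell:
# Inspect the current page file; sizes are reported in MB
Get-CimInstance Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage
# Check whether Windows is managing the page-file size automatically
Get-CimInstance Win32_ComputerSystem | Select-Object AutomaticManagedPagefile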
Thank you very much for your support. I was able to get the program to work by setting up the paging file and restarting. It turns out I had set it too low, at 16 MB, in order to free up space on the C drive, which was not ideal. I increased the paging file to 9048 MB and was able to run the program at the lightest setting.
I am still a little concerned that GPU usage is almost 100%. Is it really possible to run this program with 8 GB of VRAM? I also got a bit carried away and tried a slightly heavier setting (network_dim raised from 4 to 128), and that did not work. I am wondering whether I need to increase the paging file further, or whether it is simply not possible on my machine. I will experiment to find the optimal settings.
Again, thank you for your support. It is much appreciated.
I have an RTX 3060 with 12 GB of VRAM, and LoRA is giving me an error saying the paging file is too small, so I cannot train. Am I doing something wrong? I would appreciate it if you could let me know.
●What we have already done
- Reinstalled the venv and ran pip install
- Did a clean install
- Changed from multiple monitors to just one
- Turned off Chrome, which was running at the same time, so that only LoRA was running
- Updated
●Result
None of those things worked and I keep getting the same error.
●Situation
When I checked Task Manager, all 12 GB of VRAM was at 100% usage while image sizes were being loaded. Then, the moment the console says training has started, VRAM usage drops to 0% and the following error message is displayed.
●Operating environment
Edition: Windows 11 Home, version 22H2
Processor: 12th Gen Intel(R) Core(TM) i5-12400F, 2.50 GHz
RAM: 64.0 GB (63.8 GB available); due to the use of RAM disks, actual usable RAM is 53 GB
System type: 64-bit OS, x64-based processor
Graphics board: RTX 3060, 12 GB VRAM
Console used: PowerShell
Python version: 3.10.6
Commit hash: 785c4d8
●Error message
●List of packages installed in the virtual environment
I do not know much about programming, so I would appreciate it if you could let me know whether there is any other information I need to provide in order to solve the problem.