After the latest update, I keep getting this error whenever I try to train a beta network, no matter what I do.
venv "D:\A1111\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Installing requirements for Web UI
Installing requirements for Batch Face Swap
#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:
Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Dreambooth revision: 43ae9d55531004f1dedaea7ac2443e9b16739913
SD-WebUI revision: 0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.16rc425 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
#######################################################################################################
Installing imageio-ffmpeg requirement for depthmap script
Installing pyqt5 requirement for depthmap script
Launching Web UI with arguments: --autolaunch --xformers
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Hypernetwork-MonkeyPatch-Extension found!
SD-Webui API layer loaded
Loading weights [cc6cb27103] from D:\A1111\stable-diffusion-webui\models\Stable-diffusion\SD1.5model.ckpt
Creating model from config: D:\A1111\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 3.5s (create model: 0.4s, apply weights to model: 0.6s, apply half(): 0.6s, move model to device: 0.8s, load textual inversion embeddings: 0.9s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Using Beta Scheduler
Using learn rate decay(per cycle) of 1.0
Save when converges : False
Generate image when converges : False
Training at rate of 0.0002 until step 1000
D:\A1111\stable-diffusion-webui\models\hypernetworks\BRAMBLE.pt
Loading hypernetwork BRAMBLE
[1.0, 3.0, 1.0]
Calculating sha256 for D:\A1111\stable-diffusion-webui\models\hypernetworks\BRAMBLE.pt: 104d7db9e128dd8aab7a16db1485f4aeb2e1f919497df8dca4c7a0b8143be06b
No saved optimizer exists in checkpoint
Training at rate of 0.0002 until step 1000
Dataset seed was set to f5813355383601171383
Preparing dataset...
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:03<00:00, 4.68it/s]
  0%|                                                                                         | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\A1111\stable-diffusion-webui\extensions\Hypernetwork-MonkeyPatch-Extension\patches\external_pr\hypernetwork.py", line 372, in train_hypernetwork
for j, batch in enumerate(dl):
File "D:\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
data = self._next_data()
File "D:\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\data\dataloader.py", line 671, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "D:\A1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 61, in fetch
return self.collate_fn(data)
File "D:\A1111\stable-diffusion-webui\extensions\Hypernetwork-MonkeyPatch-Extension\patches\external_pr\dataset.py", line 260, in collate_wrapper
return BatchLoader(batch)
File "D:\A1111\stable-diffusion-webui\extensions\Hypernetwork-MonkeyPatch-Extension\patches\external_pr\dataset.py", line 249, in __init__
self.weight = torch.stack([entry.weight for entry in data]).squeeze(1)
TypeError: expected Tensor as element 0 in argument 0, but got NoneType
Applying xformers cross attention optimization.
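For context, the `TypeError` means `torch.stack` was handed a list whose first element is `None` instead of a tensor: in the extension's `collate_wrapper`, at least one dataset entry has `entry.weight = None`. A minimal sketch reproducing the failure, plus a hypothetical defensive collate that substitutes a default weight (this is an illustration, not the extension's actual fix; the tensor shape is assumed):

```python
import torch

# Reproduce the TypeError from the traceback: torch.stack requires every
# element to be a Tensor, but one entry's weight is None.
entries = [None, torch.ones(1, 4)]
try:
    torch.stack(entries).squeeze(1)
except TypeError as e:
    print(e)  # expected Tensor as element 0 in argument 0, but got NoneType

# Hypothetical workaround: fill missing weights with a uniform default
# before stacking, so the batch still collates.
def safe_stack(weights, shape=(1, 4)):
    filled = [w if w is not None else torch.ones(shape) for w in weights]
    return torch.stack(filled).squeeze(1)

print(safe_stack(entries).shape)  # torch.Size([2, 4])
```

In practice this usually points at stale cached latents or dataset entries created before per-image weights were introduced, so regenerating the dataset cache (or updating the extension) is the more likely remedy than patching the collate function.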