TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth

Error when trying to use BLIP, Dreambooth, Hypernetwork and Embeddings. SD 2.1 - Automatic1111 Updated. #914

Open artificialguybr opened 1 year ago

artificialguybr commented 1 year ago

First, I have this problem when trying to use BLIP for captions in Automatic1111:

  0% 0/770 [00:01<?, ?it/s]
Error completing request
Arguments: ('/content/gdrive/MyDrive/Dataset', '/content/gdrive/MyDrive/DatasetFinal', 768, 768, 'ignore', False, False, True, False, 0.5, 0.2, True, 0.9, 0.15, 0.5, False) {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/ui.py", line 19, in preprocess
    modules.textual_inversion.preprocess.preprocess(*args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 23, in preprocess
    preprocess_work(process_src, process_dst, process_width, process_height, preprocess_txt_action, process_flip, process_split, process_caption, process_caption_deepbooru, split_threshold, overlap_ratio, process_focal_crop, process_focal_crop_face_weight, process_focal_crop_entropy_weight, process_focal_crop_edges_weight, process_focal_crop_debug)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 188, in preprocess_work
    save_pic(focal, index, params, existing_caption=existing_caption)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 83, in save_pic
    save_pic_with_caption(image, index, params, existing_caption=existing_caption)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/preprocess.py", line 52, in save_pic_with_caption
    caption += shared.interrogator.generate_caption(image)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/interrogate.py", line 133, in generate_caption
    caption = self.blip_model.generate(gpu_image, sample=False, num_beams=shared.opts.interrogate_clip_num_beams, min_length=shared.opts.interrogate_clip_min_length, max_length=shared.opts.interrogate_clip_max_length)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/blip/models/blip.py", line 156, in generate
    outputs = self.text_decoder.generate(input_ids=input_ids,
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 1268, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 964, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)

I get this error when I'm trying to run Dreambooth in the Automatic1111 WebUI.

First part of the error appears when I start a new session with the Dreambooth extension installed in SD (but the WebUI starts normally; only this error appears):

Error loading script: api.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 184, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/api.py", line 10, in <module>
    from extensions.sd_dreambooth_extension.dreambooth.sd_to_diff import extract_checkpoint
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/sd_to_diff.py", line 36, in <module>
    from diffusers import (
ImportError: cannot import name 'HeunDiscreteScheduler' from 'diffusers' (/usr/local/lib/python3.8/dist-packages/diffusers/__init__.py)

Error loading script: main.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 184, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/main.py", line 5, in <module>
    from extensions.sd_dreambooth_extension.dreambooth.diff_to_sd import compile_checkpoint
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py", line 214, in <module>
    def convert_text_enc_state_dict_v20(text_enc_dict: dict[str, torch.Tensor]):
TypeError: 'type' object is not subscriptable

LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [6bccbcc6] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/model.ckpt
Applying xformers cross attention optimization.
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:

Then when I go into the GUI, it doesn't show the Dreambooth tab.

artificialguybr commented 1 year ago

I tried training an embedding and got this error:

File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 328, in train_embedding loss = shared.sd_model(x, c)[0] / gradient_step File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 846, in forward return self.p_losses(x, c, t, *args, **kwargs) File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 903, in p_losses logvar_t = self.logvar[t].to(self.device) RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

artificialguybr commented 1 year ago

And I get this error with hypernetworks:

Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/hypernetworks/ui.py", line 50, in train_hypernetwork
    hypernetwork, filename = modules.hypernetworks.hypernetwork.train_hypernetwork(*args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/hypernetworks/hypernetwork.py", line 444, in train_hypernetwork
    hypernetwork.train_mode()
AttributeError: 'Hypernetwork' object has no attribute 'train_mode'

7wpanc24 commented 1 year ago

I'm having the same issue with embeddings. Worked fine 2 days ago. I deleted the sd folder and reinstalled, but still getting the same error:

File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 328, in train_embedding loss = shared.sd_model(x, c)[0] / gradient_step File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, *kwargs) File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 846, in forward return self.p_losses(x, c, t, args, **kwargs) File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 903, in p_losses logvar_t = self.logvar[t].to(self.device) RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

TheLastBen commented 1 year ago

The errors come from files untouched by the Colab, so you might need to open an issue in the A1111 repo.

7wpanc24 commented 1 year ago

can do. thank you

artificialguybr commented 1 year ago

@TheLastBen I was able to fix hypernetworks and embeddings with this:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5523#issuecomment-1343041303 I added `self.logvar = self.logvar.to(self.device)` in ddpm.py just above line 903 before starting anything.
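For anyone hitting the same RuntimeError, here is a minimal standalone sketch of what goes wrong and what the added line works around (a hypothetical illustration, not the webui code; assumes a CUDA runtime is available):

import torch

# Minimal repro: indexing a CPU tensor with GPU indices raises the error above.
logvar = torch.zeros(1000)                       # buffer left on the CPU, like self.logvar
t = torch.randint(0, 1000, (4,), device="cuda")  # timestep indices on the GPU
try:
    logvar_t = logvar[t]                         # RuntimeError: indices should be either on cpu or ...
except RuntimeError as err:
    print(err)

# The workaround from the linked comment: move the buffer to the indices' device first.
logvar = logvar.to(t.device)
logvar_t = logvar[t]                             # now works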

However, when it goes to generate the example images, they come out black. Normal generation in txt2img works fine. The Automatic1111 repo says:

If 2.0 or 2.1 is generating black images, enable full precision with --no-half or try using the --xformers optimization.

However, I don't know how to apply this only to the preview-image generation for hypernetworks and embeddings. Do you know how, and could you help me?
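For reference, --no-half and --xformers are ordinary WebUI command-line flags, so in a Colab they go on whatever command the last cell uses to start the WebUI. This is a hedged sketch only (the real launch cell in the fast-stable-diffusion notebook passes additional arguments that are omitted here), and it applies the flags globally rather than only to preview generation:

# Hypothetical launch cell; extra arguments from the actual notebook are omitted.
!python /content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py --no-half --xformers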

roperi commented 1 year ago

I also got the same cannot import name 'HeunDiscreteScheduler' from 'diffusers' error message after restarting the WebUI once the Dreambooth extension was installed. As a result, no Dreambooth tab is shown.

~I got rid of the error by running export REQS_FILE="./extensions/sd_dreambooth_extension/requirements.txt" before the last cell in the Colab notebook (i.e. Start stable-diffusion), BUT then I got another error:~

~Error loading script: api.py Traceback (most recent call last): File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 195, in load_scripts module = script_loading.load_module(scriptfile.path) File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module exec(compiled, module.__dict__) File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/api.py", line 9, in <module> from extensions.sd_dreambooth_extension.dreambooth import dreambooth File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/dreambooth.py", line 12, in <module> from diffusers.utils import logging as dl File "/usr/local/lib/python3.8/dist-packages/diffusers/__init__.py", line 32, in <module> from .pipelines import ( ModuleNotFoundError: No module named 'diffusers.pipelines'~

EDIT: I still get the same error as OP.

roperi commented 1 year ago

@jvkap

EDIT: The following solved the 'Dreambooth tab not showing in the web UI' problem.

I solved the HeunDiscreteScheduler error with `!pip install git+https://github.com/huggingface/diffusers`. This should install a compatible version of diffusers and fix the error.

But if it then throws a transformers error, run `!pip install git+https://github.com/huggingface/transformers`. This should install a compatible version of transformers.

Having done that, you should be left with one last error message:

def convert_text_enc_state_dict_v20(text_enc_dict: dict[str, torch.Tensor]):
TypeError: 'type' object is not subscriptable

This seems to be a Python 3.8 problem (built-in generics like dict[str, ...] are only subscriptable at runtime from Python 3.9 onward). So open the file "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py" and delete the type hint from the function parameter, changing `def convert_text_enc_state_dict_v20(text_enc_dict: dict[str, torch.Tensor]):` to `def convert_text_enc_state_dict_v20(text_enc_dict):`. Also change `def convert_text_enc_state_dict(text_enc_dict: dict[str, torch.Tensor]):` to `def convert_text_enc_state_dict(text_enc_dict):`. These two deletions are safe.

Save the edited file, re-run everything, and voila! You should finally get the Dreambooth tab.

P.S. I'm using model 1.5, but this should work for 2.x.
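For context on why deleting those hints works: subscripting the built-in dict type at runtime (dict[str, ...]) was only added in Python 3.9 (PEP 585), so on the Colab's Python 3.8 the def line itself raises as soon as the module is imported. A small sketch (not the extension's code) showing the failure mode and two alternative workarounds that keep the annotations:

from typing import Dict

import torch

# On Python 3.8 this raises TypeError: 'type' object is not subscriptable, because
# dict[...] cannot be subscripted at runtime before Python 3.9:
#   def convert_text_enc_state_dict_v20(text_enc_dict: dict[str, torch.Tensor]): ...

# Workaround 1: use typing.Dict, which is subscriptable on Python 3.8.
def convert_text_enc_state_dict_v20(text_enc_dict: Dict[str, torch.Tensor]):
    return text_enc_dict

# Workaround 2: put `from __future__ import annotations` at the very top of the module
# so annotations are never evaluated at runtime (then dict[str, ...] is accepted).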

thinkyhead commented 1 year ago
pip install git+https://github.com/huggingface/diffusers

GitHub has apparently discontinued password-based HTTPS authentication, so watch out for that. I had to use…

pip install git+ssh://git@github.com/huggingface/diffusers
roperi commented 1 year ago

Thanks for the heads up, @thinkyhead.

I encountered another problem when training in Dreambooth. Again, the error is thrown because of a type hint, this time on the main function in the train_dreambooth.py file. So go to the Dreambooth extension folder in your stable-diffusion directory, look for the train_dreambooth.py file, open it, and change `def main(args: DreamboothConfig, memory_record, use_subdir, lora_model=None, lora_alpha=1) -> tuple[DreamboothConfig, dict, str]:` to `def main(args: DreamboothConfig, memory_record, use_subdir, lora_model=None, lora_alpha=1):`.

mediocreatmybest commented 1 year ago

OK, I won't pretend this is perfect. It seems to allow training to start for me; I'm just testing it now. But it might let someone do a quick copy-paste into an existing Colab.

Based on the info in here, I've created two additional sections.

#@markdown # Tweaks #1
#@markdown Some tweaks to enable some Automatic1111 extensions, installing updated diffusers and transformers.

#@markdown Also importing *subprocess with getoutput*
from subprocess import getoutput

!pip install git+https://github.com/huggingface/diffusers

!pip install git+https://github.com/huggingface/transformers
#@markdown # Tweaks #2
#@markdown Some tweaks to enable some Automatic1111 extensions
#@markdown from the discussion:

#@markdown https://github.com/TheLastBen/fast-stable-diffusion/issues/914
from IPython.utils import capture         # used by capture.capture_output() below (assumed already imported in the main Colab)
from IPython.display import clear_output  # used by clear_output() below

remove_xformers = False #@param {type:"boolean"}
#@markdown  - Remove xFormers? Only remove if required.

if remove_xformers:
  !pip uninstall -y xformers  # -y so the uninstall doesn't wait for a confirmation prompt

replace_diff_to_sd = False #@param {type:"boolean"}
#@markdown  - Replace diff_to_sd.py within dreambooth extension

replace_train_dreambooth = False #@param {type:"boolean"}
#@markdown  - Replace train_dreambooth.py within dreambooth extension

if replace_diff_to_sd:
  !wget https://raw.githubusercontent.com/mediocreatmybest/gaslightingeveryone/main/Scripts/scraps/StableDiffusion-misc/diff_to_sd.py -O /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py

if replace_train_dreambooth:
  !wget https://raw.githubusercontent.com/mediocreatmybest/gaslightingeveryone/main/Scripts/scraps/StableDiffusion-misc/train_dreambooth.py -O /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py

reset_dreambooth_ext = False #@param {type:"boolean"}
#@markdown  - Removes modified files and updates dreambooth extension

if reset_dreambooth_ext:
  with capture.capture_output() as cap:
    !rm /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py  
    !rm /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py
    %cd /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension
    print('')
    !git pull
    clear_output()
    print('DONE !')

I also uploaded the modification in case it doesn't display here correctly:

https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/Scripts/scraps/StableDiffusion-misc/tweaked_fast_stable_diffusion_AUTOMATIC1111.ipynb

mediocreatmybest commented 1 year ago

Did a training session for 10,000 steps with LORA.

Got the following at the end.


Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 977, in main
    for step, batch in enumerate(train_dataloader):
  File "/usr/local/lib/python3.8/dist-packages/accelerate/data_loader.py", line 357, in __iter__
    next_batch = next(dataloader_iter)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 671, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/SuperDataset.py", line 257, in __getitem__
    concept_data = self.concepts[c_index]
IndexError: list index out of range
|=======================================

I also can't see whether or where any models were saved.

mediocreatmybest commented 1 year ago

OK, I did a quick check compared to my local install. It did train for the full 10,000 steps but didn't actually save any checkpoints, .pt files, etc.

roperi commented 1 year ago

@mediocreatmybest

Thanks! Very thankful for your notebook update.

I also had Dreambooth train for an hour only for it to throw an error at the last moment. It didn't save anything.

In my case I needed an updated version of accelerate. So it would be great if you could add this one line (as opposed to the two lines you added based on my previous suggestion):

!pip install -U diffusers transformers accelerate

mediocreatmybest commented 1 year ago

No worries, I've updated the file but I'm getting a different error now.

Returning result: Exception training model: main() got an unexpected keyword argument 'lora_txt_alpha'

Not sure if that's related to the modified Dreambooth files or not.

mediocreatmybest commented 1 year ago

OK, this is pretty wonky: xformers is super unreliable to build and distribute via a wheel. I've compiled it about 4 times to build a wheel, but it's never detected when the Colab is restarted.

The only way I could get it to work was to build it from GitHub and install it with pip, which takes over an hour to run, so I don't think it's worth it.

Anyway, totally unsupported and no doubt could break.

Might be useful to someone.

Behold the modified colab based on fast stable diffusion with Python 3.10 and Automatic1111, Cludge Edition

https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/Colab/fast_stable_diffusion_AUTOMATIC1111_CludgeEdition.ipynb

Results may vary.

mediocreatmybest commented 1 year ago

Update to the above: xformers is fixed, and a --pre build is now available on pip. Only just saw it.

roperi commented 1 year ago

@mediocreatmybest:

I haven't messed with LoRA yet, so I don't know why you're getting the error message. I didn't get any errors, and I've Dreambooth-trained two models successfully. Did you fine-tune anything with LoRA?

Your error message: Returning result: Exception training model: main() got an unexpected keyword argument 'lora_txt_alpha'

...seems to be related to an old version of whatever file has that main() function. Perhaps you could try pip-installing (with -U) its related package, load this function in the Colab notebook, and run it with the same keyword argument to see if it still throws the error before doing an hours-long training.

EDIT: Sorry, I didn't notice you already told us you fine-tuned using LoRA.

mediocreatmybest commented 1 year ago

Yeah, it seems to be related to LoRA and SD v2. I've got it working correctly with SD v1 and SD v2 without LoRA, no worries. As mentioned above, I got Python 3.10/xformers working on that cludged Colab I posted, so there's no need to mod the files to get it working with 3.8. It's not perfect, and there might be a better way to get it working, but it works quite well for me now.

roperi commented 1 year ago

@mediocreatmybest

Amazing! I'll test it and get back if anything goes wrong. Thanks!

artificialguybr commented 1 year ago

LoRA has already been made to work with SD 2.1. You can use a dedicated Colab for LoRA here:

https://colab.research.google.com/drive/1iSFDpRBKEWr2HLlz243rbym3J2X95kcy?usp=sharing#scrollTo=RXhqKsN8cEop

mediocreatmybest commented 1 year ago

Thanks! Yeah, this was an attempt to use the AUTOMATIC1111 Colab with LoRA since it's built into the Dreambooth extension, so I suspect it could be an issue with how it's implemented in the extension, given that I get the same error on my local install.

UsamaKenway commented 1 year ago

Regarding the xformers wheel: you can just install xformers from here: `!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+f82722f.d20221217-cp310-cp310-linux_x86_64.whl`. Not everyone has to build it; one person can build it on the same type of OS and Python version and share the .whl file.

mediocreatmybest commented 1 year ago

Thanks! xformers also has a pre-release version out on pip now: pip install --pre xformers

I updated the script to that version; it seems to run fine. 👍

artificialguybr commented 1 year ago

OK. So we've fixed the textual inversion part, but LoRA and Dreambooth are still buggy. Is that correct?

It would be interesting to have help from @d8ahazard for that Dreambooth issue.

And @cloneofsimo for Lora problem.

Would anyone be interested in opening an issue in their repo?

cloneofsimo commented 1 year ago

Wait what? I didn't know this even existed. You guys have adapted LoRA here?

mediocreatmybest commented 1 year ago

Wait what? I didn't know this even existed. You guys have adapted LoRA here?

Yep! d8ahazard has it implemented in his Dreambooth extension. I've successfully gotten it working with SD 1.x, but it keeps crashing with SD 2.x, so I'm not sure whether it's A) something I'm doing incorrectly, B) my dodgy Colab, or C) something not working correctly in the extension. I honestly haven't flagged it as an issue until I can test it on a local install; my laptop obviously doesn't have enough VRAM, even with an NVIDIA GPU.

roperi commented 1 year ago

Dreambooth works for me but only after making the changes I suggested above.

ekeric13 commented 1 year ago

@mediocreatmybest

Trying out your script and I am getting this error:

Error running install.py for extension /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension.
Command: "/usr/bin/python3" "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/install.py"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/install.py", line 10, in <module>
    from launch import run
ModuleNotFoundError: No module named 'launch'

Seems to be failing here: https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/install.py#L10

I have no idea what that package is supposed to be, and it's not part of the Python stdlib. Do you see that line of code in your Google Drive copy? Are you using an older version of the extension?

d8ahazard commented 1 year ago

Hey all!

So, I think the issue here (at least from the first post) is that the latest transformers version has a check that throws an error if it finds extra kwargs:

"/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 964, in _validate_model_kwargs raise ValueError( ValueError: The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)

I had that patched out using the "xattention.py" file in my repo, which is just sort of a generic catch-all for monkey-patching various bugs I found along the way. But I bumped my diffusers and transformers versions to match the official diffusers repo (as of ver 0.10.2) and didn't have to monkey-patch this any more. So I think the right answer is "check the diffusers and transformers versions".

If I missed some other issue in the discussion, throw me an error message and I'll let you know if it's anything else I hacked in to fix stuff.

@ekeric13 - "Launch" refers to launch.py from the auto1111 script. It's just a wrapper for the run_command call, which I should probably just remove and add my own, as I'm already working to make less of the extension depend directly on Auto1111, just for compatibility and sanity's sake. ;)

mediocreatmybest commented 1 year ago

Awesome, Thanks!

I'll give it a shot and have a look. @ekeric13: the script I did was a bit of a cludge to avoid some of the issues with Python 3.8 until something was fixed. It isn't perfect and really just does some fudging around TheLastBen's Colab; ultimately the ideal solution is to have it working within the Colab, as that's currently the only option (for me and others) to get access to a GPU with 16 GB of VRAM.

Also, for those who want to try updating the extensions before you launch Automatic1111, you can add this into a cell in this Colab and do a git pull on each extension:


import os, subprocess

%cd /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions
!touch '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'
!rm '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'
for dirname in os.listdir('/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions'):
    if os.path.isdir(dirname):                        # only git-pull actual extension folders
        subprocess.run(['git', 'pull'], cwd=dirname)  # wait for each pull to finish
!touch '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'

Seems to work for me. So I guess use at your own risk?

mediocreatmybest commented 1 year ago

Looks like it is still an issue even when updating transformers in the standard Colab; as mentioned, this seems to be mainly a Python 3.8 issue:

Dreambooth API layer loaded
Error loading script: main.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 195, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/main.py", line 5, in <module>
    from extensions.sd_dreambooth_extension.dreambooth.diff_to_sd import compile_checkpoint
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py", line 214, in <module>
    def convert_text_enc_state_dict_v20(text_enc_dict: dict[str, torch.Tensor]):
TypeError: 'type' object is not subscriptable
d8ahazard commented 1 year ago

Oooohhh...

I wonder, is it the type hint that's making it mad?

You could try editing line 214 to read

def convert_text_enc_state_dict_v20(text_enc_dict):

Removing the hint. I bet that fixes it.

mediocreatmybest commented 1 year ago

Thanks! Restarting the whole Colab; it seems to have cached the original scripts. I'll let you know shortly.

mediocreatmybest commented 1 year ago
Dreambooth API layer loaded
Error loading script: main.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 195, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/main.py", line 5, in <module>
    from extensions.sd_dreambooth_extension.dreambooth.diff_to_sd import compile_checkpoint
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/diff_to_sd.py", line 256, in <module>
    def convert_text_enc_state_dict(text_enc_dict: dict[str, torch.Tensor]):
TypeError: 'type' object is not subscriptable

Similar error, so I'm guessing the same change is needed at line 256 as well.

Change from:

def convert_text_enc_state_dict(text_enc_dict: dict[str, torch.Tensor]):

to

def convert_text_enc_state_dict(text_enc_dict):
ekeric13 commented 1 year ago

@d8ahazard thanks for the response!

Re-ran the Colab and I magically no longer see the ModuleNotFoundError: No module named 'launch' error. Instead I got the errors the OP mentioned. Very frustrating.

The first error was fixed by Tweak #1 in @mediocreatmybest's notebook (installing extra pip modules):

from subprocess import getoutput

!pip install git+https://github.com/huggingface/diffusers
!pip install git+https://github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate

The second error was fixed by updating the diff_to_sd.py file to remove the type hints, as @roperi and you mentioned.

Screenshot 2022-12-19 at 7 28 08 PM

Going to try to train a new model later, but the UI looks to be working.

mediocreatmybest commented 1 year ago

Just did an update to diffusers and transformers with: !pip install diffusers==0.10.2 transformers==4.25.1

Changed those two lines above as mentioned.

Looks like those two lines fixed the original colab with the extension.

in diff_to_sd.py Line 214 to:

def convert_text_enc_state_dict_v20(text_enc_dict):

Line 256 to:

def convert_text_enc_state_dict(text_enc_dict):

in train_dreambooth.py Line 359 and 360 to:

def main(args: DreamboothConfig, memory_record, use_subdir, lora_model=None, lora_alpha=1):

dream

mediocreatmybest commented 1 year ago

Figured I should do a quick test run to see if there were any other errors.

Initializing dreambooth training...
Patching transformers to fix kwargs errors.
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/dreambooth.py", line 361, in start_training
    from extensions.sd_dreambooth_extension.dreambooth.train_dreambooth import main
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 359, in <module>
    def main(args: DreamboothConfig, memory_record, use_subdir, lora_model=None, lora_alpha=1.0, lora_txt_alpha=1.0, custom_model_name="") -> tuple[
TypeError: 'type' object is not subscriptable
 Training completed, reloading SD Model. 

This is with LoRA off, SD 2.1 768px, xformers, FP16, and 8-bit Adam enabled.

d8ahazard commented 1 year ago

Delete the type hint at 359 as well (the -> tuple[...] return annotation).

mediocreatmybest commented 1 year ago

As mentioned further up the post (which I missed as well): I'm updating my post above with this too.

Thanks, that fixed that error as well. Python isn't my strong area; I only just started looking at the basics a few weeks ago. Looks like this just came up as well:

Initializing dreambooth training...
Patching transformers to fix kwargs errors.
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/dreambooth.py", line 362, in start_training
    config, mem_record, msg = main(config, mem_record, use_subdir=use_subdir, lora_model=lora_model_name,
TypeError: main() got an unexpected keyword argument 'lora_txt_alpha'
 Training completed, reloading SD Model. 
 Allocated: 0.0GB 
 Reserved: 0.0GB 

Memory output: {}
 Restored system models. 
 Allocated: 2.6GB 
 Reserved: 2.7GB 

Returning result: Exception training model: main() got an unexpected keyword argument 'lora_txt_alpha'
mediocreatmybest commented 1 year ago

Updated the Colab again based on those changes. Hopefully it helps someone :)

https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/Scripts/scraps/StableDiffusion-misc/tweaked_fast_stable_diffusion_AUTOMATIC1111.ipynb

ekeric13 commented 1 year ago

@mediocreatmybest I am actually testing whether the Dreambooth training works now.

I did not get the LoRA error, but I also avoided checking it as a parameter:

Screenshot 2022-12-19 at 8 40 17 PM

So far no errors, but it's only 5% done.

edit: finished with no errors

artificialguybr commented 1 year ago

From what I understand, LoRA is only working in 2.0 and not in 2.1.

artificialguybr commented 1 year ago

@TheLastBen What do you think about doing these modifications optionally in the official Colab?

mediocreatmybest commented 1 year ago

As the official version pulls down a whole dependency environment, I suspect that would be something he'd need to update there, though personally I'd love to see a little tweaks section with optional fixes for specific situations.

Even adding something like I have in another Colab I've done, which updates all the extensions within the Colab; it's much faster doing it that way than trying to get the web UI to do it.

#@markdown  - Check to update Automatic1111 extensions
update_extensions = True #@param {type:"boolean"}

import os, subprocess

if update_extensions:
  %cd /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions
  !touch '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'
  !rm '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'
  for dirname in os.listdir('/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions'):
      if os.path.isdir(dirname):                        # only git-pull actual extension folders
          subprocess.run(['git', 'pull'], cwd=dirname)  # wait for each pull to finish
  !touch '/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/put extensions here.txt'

As an example. Thanks everyone for all the help :)

mediocreatmybest commented 1 year ago

Yeah, I think it's an SD 2.1 issue; I can't confirm with 2.0 as I don't have that model downloaded or installed, and I'm starting to run out of compute units as well. I think it may be partially related to xformers too, or at least to the way the extension uses it? I'm only guessing at this point.

roperi commented 1 year ago

@mediocreatmybest

As you mentioned it's better to use: !pip install diffusers==0.10.2 transformers==4.25.1

than to update to the newest versions with !pip install -U diffusers transformers...

...because the latest diffusers version (as of today, I guess) breaks Dreambooth training with:

  0% 0/120 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/dreambooth.py", line 362, in start_training
    config, mem_record, msg = main(config, mem_record, use_subdir=use_subdir, lora_model=lora_model_name,
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 469, in main
    concept_images = concept_pipeline(example["prompt"], num_inference_steps=concept.class_infer_steps,
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 529, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_condition.py", line 424, in forward
    sample, res_samples = downsample_block(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py", line 777, in forward
    hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 216, in forward
    hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, timestep=timestep)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 490, in forward
    hidden_states = self.attn1(norm_hidden_states, attention_mask=attention_mask) + hidden_states
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward_default() got an unexpected keyword argument 'attention_mask'
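A quick way to sanity-check which versions are actually installed in the Colab before kicking off an hours-long run (a hypothetical helper cell, matching the pins mentioned above):

import diffusers, transformers

# Expect 0.10.2 / 4.25.1 per the pin above; anything newer may hit the
# attention_mask error shown in the traceback.
print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)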
markojak commented 1 year ago

I'm having very similar issues to @jvkap trying to get the Dreambooth extension working on Google Colab: api.py is throwing a TypeError: 'type' object is not subscriptable.

The strange thing is that I've already made many of the recommended changes, including:

* Edits to `diff_to_sd.py` removing the offending `: dict[str, torch.Tensor]` from the definitions
* Installing diffusers and transformers, also changed from latest to `!pip install diffusers==0.10.2 transformers==4.25.1`

Is anyone else experiencing this? Also referencing https://github.com/d8ahazard/sd_dreambooth_extension/issues/593, which is a similar issue.

Here's my notebook: https://github.com/markojak/sd-dreambooth-colab/blob/main/SD-Dreambooth-AUTOMATIC1111.ipynb

Note that I've made the tweaks recommended by @mediocreatmybest, which are quite nice because I don't have to edit diff_to_sd.py manually. So thanks for this!

Error I'm getting

Error loading script: api.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 195, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/api.py", line 24, in <module>
    from extensions.sd_dreambooth_extension.dreambooth.finetune_utils import FilenameTextGetter
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/finetune_utils.py", line 257, in <module>
    class ImageBuilder:
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/finetune_utils.py", line 330, in ImageBuilder
    def generate_images(self, prompt_data: list[PromptData]) -> [Image]:
TypeError: 'type' object is not subscriptable
roperi commented 1 year ago

@markojak

Edit file /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/finetune_utils.py

In line 330

Change def generate_images(self, prompt_data: list[PromptData]) -> [Image]:

To def generate_images(self, prompt_data):

That will remove the TypeError.

mediocreatmybest commented 1 year ago

I haven't had a chance to check whether any of the tweaks I did still work 100% with the current versions; it looks like quite a bit has been updated in the different extensions as well. I'll try to have a poke around and test it out. I'd run out of compute credits on Colab due to all the previous testing, but hopefully I've been topped up. Nothing more frustrating than walking out of the room and coming back to a timed-out session :\