XavierXiao / Dreambooth-Stable-Diffusion

Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
MIT License

OSError: Can't load tokenizer for "directory" #93

Open xMatijaKx opened 1 year ago

xMatijaKx commented 1 year ago

Hello! When I click on train, nothing happens, and I get this error in the console:

OSError: Can't load tokenizer for 'D:\AI_Image\Stable_diffusion\stable-diffusion-webui\models\dreambooth\testtrainmodel\working'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'D:\AI_Image\Stable_diffusion\stable-diffusion-webui\models\dreambooth\testtrainmodel\working' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Any help?
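For reference, here is a quick stdlib-only way to check which tokenizer files are actually missing from that `working` folder. The file list below is what `CLIPTokenizer.save_pretrained` typically writes; the exact set can vary between transformers versions, so treat it as an approximation:

```python
import os

# Files a CLIPTokenizer typically needs on disk (what save_pretrained writes).
# Assumption: the exact set can vary between transformers versions.
REQUIRED = ["vocab.json", "merges.txt", "tokenizer_config.json", "special_tokens_map.json"]

def missing_tokenizer_files(path):
    """Return the required tokenizer files that are absent from `path`."""
    return [f for f in REQUIRED if not os.path.isfile(os.path.join(path, f))]

# Example (use your own model's path):
# print(missing_tokenizer_files(r"D:\AI_Image\Stable_diffusion\stable-diffusion-webui\models\dreambooth\testtrainmodel\working"))
```

If everything in the list comes back missing, the checkpoint extraction step never populated the folder at all.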

McFex commented 1 year ago

Just bumped into exactly the same error. I tried everything obvious, like reloading and removing spaces from file and directory names. My console already reports an exception when the model is created. I'll just post the whole thing (the two lines saying "Error loading params" are from hitting the new Wizard tab twice):

Creating dreambooth model folder: flxdrmbth
Exception with the conversion: 'Namespace' object has no attribute 'ckptfix'
Traceback (most recent call last):
  File "C:\stable-diffusion\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\dreambooth\conversion.py", line 847, in extract_checkpoint
    if shared.cmd_opts.ckptfix or shared.cmd_opts.medvram or shared.cmd_opts.lowvram:
AttributeError: 'Namespace' object has no attribute 'ckptfix'
 Extraction completed.
 Allocated: 2.0GB
 Reserved: 2.0GB

Error loading params.
Error loading params.
Starting Dreambooth training...
 VRAM cleared.
 Allocated: 0.0GB
 Reserved: 0.0GB

Replace CrossAttention.forward to use default
 Cleanup completed.
 Allocated: 0.0GB
 Reserved: 0.0GB

Error completing request
Arguments: ('flxdrmbth', False, False, '', 'D:\\PICTURES RAW\\stable_diffusion_training\\Felix\\trainingPNG', 'D:\\PICTURES RAW\\stable_diffusion_training\\regularization_images\\Stable-Diffusion-Regularization-Images\\person_ddim', 'photo of a flxdrmbth person', 'photo of a person', 'Description', '', '', '', '', 1, 0, 7.5, 40, 100, 512, False, True, 1, 1, 1, 1111, 1, True, 1.72e-06, False, 'constant', 0, 'default', True, 0.9, 0.999, 0.01, 1e-08, 1, 5000, 5000, 'fp16', True, '', False, True, '75', True, False, '', 7.5, 40, False) {}
Traceback (most recent call last):
  File "C:\stable-diffusion\stable-diffusion-webui-master\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion\stable-diffusion-webui-master\webui.py", line 54, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\dreambooth\dreambooth.py", line 527, in start_training
    config, mem_record = main(config, mem_record)
  File "C:\stable-diffusion\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 717, in main
    tokenizer = CLIPTokenizer.from_pretrained(
  File "C:\stable-diffusion\stable-diffusion-webui-master\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1789, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'C:\stable-diffusion\stable-diffusion-webui-master\models\dreambooth\flxdrmbth\working'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'C:\stable-diffusion\stable-diffusion-webui-master\models\dreambooth\flxdrmbth\working' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Did you also have this exception, when creating the empty model?

xMatijaKx commented 1 year ago

Yes, same error when creating the model. I tried with multiple models; it's the same thing every time.

Also, not sure if this is relevant, but my /models/dreambooth/modelname/working directory is empty. I don't know whether it's supposed to be like that or not. I also tried manually putting a model (named "model.ckpt") in that directory, but nothing worked (that was a random guess that a model needs to be there).
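After a successful extraction, `working` should contain a diffusers-style model tree (subfolders, not a single `model.ckpt`). A rough stdlib sketch of what to look for; the subfolder names are the usual diffusers layout and may differ across versions:

```python
import os

# Subfolders a diffusers-extracted Stable Diffusion model usually contains.
# Assumption: exact names can differ across diffusers versions.
EXPECTED_SUBDIRS = ["tokenizer", "text_encoder", "unet", "vae", "scheduler"]

def missing_subdirs(working_dir):
    """Heuristic: return the usual diffusers subfolders absent from `working_dir`."""
    present = set(os.listdir(working_dir)) if os.path.isdir(working_dir) else set()
    return [d for d in EXPECTED_SUBDIRS if d not in present]
```

An empty result means the tree at least looks complete; a list of all five means extraction never ran to completion, which matches the `'Namespace' object has no attribute 'ckptfix'` exception earlier in the thread.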

BrahRah commented 1 year ago

I have the same issue with automatic 1111's version of dreambooth:

Replace CrossAttention.forward to use default
Error completing request
Arguments: ('asaricustom1ddim', False, False, '', 'D:\\AIs\\asari 512px', 'D:\\AIs\\REGULARIZATION-IMAGES-SD-main\\person', 'photos of the asaricust1ddim race from the game mass effect', 'photos of a humanoid alien race', 'Description', '', '', '', '', 1, 0, 7.5, 40, 0, 512, False, True, 1, 1, 1, 1200, 1, True, 1e-06, False, 'constant', 0, 'default', True, 0.9, 0.999, 0.01, 1e-08, 1, 5000, 5000, 'no', True, '', False, True, '75', True, False, '', 7.5, 40, False) {}
Traceback (most recent call last):
  File "D:\AIs\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "D:\AIs\stable-diffusion-webui\webui.py", line 57, in f
    res = func(*args, **kwargs)
  File "D:\AIs\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\dreambooth.py", line 527, in start_training
    config, mem_record = main(config, mem_record)
  File "D:\AIs\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 717, in main
    tokenizer = CLIPTokenizer.from_pretrained(
  File "D:\AIs\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1789, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'D:\AIs\stable-diffusion-webui\models\dreambooth\asaricustom1ddim\working'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'D:\AIs\stable-diffusion-webui\models\dreambooth\asaricustom1ddim\working' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer

sqzhang-jeremy commented 1 year ago

Me too. Has anybody found a fix?

PyroFD3S commented 1 year ago

Using auto1111, same error when trying to create models:

Converting text encoder...
Exception setting up output: Can't load tokenizer for 'stabilityai/stable-diffusion-2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Traceback (most recent call last):
  File "E:\sd\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\sd_to_diff.py", line 935, in extract_checkpoint
    tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2", subfolder="tokenizer")
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'stabilityai/stable-diffusion-2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Pipeline or config is not set, unable to continue.
Can't load config!
Traceback (most recent call last):
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:\sd\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 945, in postprocess_data
    if predictions[i] is components._Keywords.FINISHED_ITERATING:
IndexError: tuple index out of range

CloakerJosh commented 1 year ago

I have a very similar issue, same as PyroFD3S I believe. It happens when trying to create a model for the DreamBooth extension in an Automatic1111 installation:

Converting text encoder...
Exception setting up output: Can't load tokenizer for 'stabilityai/stable-diffusion-2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Traceback (most recent call last):
  File "C:\Users\CloakerJosh\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\dreambooth\sd_to_diff.py", line 935, in extract_checkpoint
    tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2", subfolder="tokenizer")
  File "C:\Users\CloakerJosh\stable-diffusion-webui-master\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'stabilityai/stable-diffusion-2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Pipeline or config is not set, unable to continue.

Would love to resolve this one; does anyone have any insight?
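One thing worth ruling out for the 'stabilityai/stable-diffusion-2' variant of the error: the message itself warns about a local directory shadowing the Hub repo id. As a rough approximation of the resolution logic (the real behavior inside transformers is more involved), `from_pretrained` prefers a local directory whose name matches the identifier over downloading from the Hub:

```python
import os

def resolves_locally(identifier, cwd="."):
    """Rough approximation: an identifier is treated as a local path when a
    directory of that name exists relative to `cwd`; only otherwise does
    from_pretrained fall back to the Hugging Face Hub."""
    return os.path.isdir(os.path.join(cwd, identifier))

# Example (run from the webui folder):
# print(resolves_locally("stabilityai/stable-diffusion-2"))
```

If this returns True from the webui's working directory, renaming or removing that folder may let the Hub download proceed; if it returns False, the failure is more likely a network or caching problem when reaching huggingface.co.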