AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

Failed to create model quickly #13551

Closed FurkanGozukara closed 8 months ago

FurkanGozukara commented 8 months ago

I am trying to load SDXL on a Kaggle notebook

It has 13 GB of RAM.

But it still fails, and I don't get why.

Startup time: 235.8s (prepare environment: 226.2s, import torch: 3.0s, import gradio: 0.8s, setup paths: 2.7s, initialize shared: 0.4s, other imports: 1.1s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 0.7s, create ui: 0.5s, gradio launch: 0.1s).
Calculating sha256 for /kaggle/temp/models/sd_xl_base_1.0.safetensors: 31e35c80fc4829d14f90153f4c74cd59c90b779f6afe05a74cd6120b893f7e5b
Loading weights [31e35c80fc] from /kaggle/temp/models/sd_xl_base_1.0.safetensors
Creating model from config: /kaggle/working/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Downloading (…)olve/main/vocab.json: 100%|███| 961k/961k [00:00<00:00, 15.8MB/s]
Downloading (…)olve/main/merges.txt: 100%|███| 525k/525k [00:00<00:00, 25.0MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████| 389/389 [00:00<00:00, 2.19MB/s]
Downloading (…)okenizer_config.json: 100%|█████| 905/905 [00:00<00:00, 5.25MB/s]
creating model quickly: TypeError
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/conda/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "/kaggle/working/stable-diffusion-webui/modules/shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/kaggle/working/stable-diffusion-webui/modules/sd_models.py", line 499, in get_sd_model
    load_model()
  File "/kaggle/working/stable-diffusion-webui/modules/sd_models.py", line 602, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/kaggle/working/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/kaggle/working/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "/kaggle/working/stable-diffusion-webui/repositories/generative-models/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/kaggle/working/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "/kaggle/working/stable-diffusion-webui/repositories/generative-models/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/kaggle/working/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "/kaggle/working/stable-diffusion-webui/modules/sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "/kaggle/working/stable-diffusion-webui/modules/sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
TypeError: transformers.modeling_utils.PreTrainedModel.from_pretrained() got multiple values for keyword argument 'config'

Failed to create model quickly; will retry using slow method.
Downloading (…)lve/main/config.json: 100%|█| 4.52k/4.52k [00:00<00:00, 13.4MB/s]
Downloading model.safetensors: 100%|████████| 1.71G/1.71G [00:09<00:00, 181MB/s]
Downloading (…)ip_pytorch_model.bin: 100%|██| 10.2G/10.2G [00:59<00:00, 172MB/s]
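
For reference, the traceback above shows the sd_disable_initialization.py wrapper frame twice, which suggests the from_pretrained hook got applied on top of itself. The following is a minimal, self-contained sketch (all names are made up; this is not the webui code) of how that kind of double-wrapping produces exactly this "got multiple values for keyword argument 'config'" TypeError:

```python
# Sketch only: a stand-in for a from_pretrained-style classmethod and a hook that
# reroutes the checkpoint path into the `config` keyword, like the wrapper call
# visible in the traceback above.
class FakeTextModel:
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
        return cls()


def install_hook():
    original = FakeTextModel.from_pretrained

    def hooked(pretrained_model_name_or_path, *model_args, **kwargs):
        # First positional argument becomes None; the path moves to `config`.
        return original(None, *model_args, config=pretrained_model_name_or_path, **kwargs)

    FakeTextModel.from_pretrained = hooked


install_hook()
install_hook()  # installing the hook a second time wraps the wrapper itself

try:
    FakeTextModel.from_pretrained("openai/clip-vit-large-patch14")
except TypeError as e:
    # The outer wrapper already put `config` into kwargs, and the inner wrapper
    # adds its own `config=...`, so Python reports multiple values for 'config'.
    print(e)
```

Note that the TypeError only breaks the fast path; as the log shows, the web UI falls back to the slow method and still loads the model.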
FurkanGozukara commented 8 months ago

I found the reason and updated my notebook: https://twitter.com/GozukaraFurkan/status/1711134402380496953

jake-nz commented 8 months ago

I found the reason and updated my notebook

What did you find the reason to be?

FurkanGozukara commented 8 months ago

I found the reason and updated my notebook

What did you find the reason to be?

It turns out their temp disk is currently broken and therefore super slow.

I don't use it anymore.
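
If you want to check this on your own session, a rough sketch like the one below (paths taken from the log above; `write_read_speed` is just a made-up helper, and read numbers can be inflated by the page cache) compares write/read throughput of the temp disk against the working directory:

```python
# Rough throughput check (a sketch, not a rigorous benchmark).
import os
import time

def write_read_speed(directory, size_mb=256):
    path = os.path.join(directory, "_disk_speed_test.bin")
    data = os.urandom(1024 * 1024)  # 1 MiB chunk

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_s = time.perf_counter() - start

    os.remove(path)
    return size_mb / write_s, size_mb / read_s

for directory in ("/kaggle/temp", "/kaggle/working"):
    if os.path.isdir(directory):
        w, r = write_read_speed(directory)
        print(f"{directory}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```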

FurkanGozukara commented 8 months ago

By the way, Kaggle just upgraded the free Notebook resources. A huge upgrade.

From 13 GB of RAM to 29 GB, and from 2 CPUs to 4 CPUs.
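
You can confirm the allocation from inside a notebook with a couple of lines (a small sketch; psutil is usually preinstalled on Kaggle images, otherwise `pip install psutil`):

```python
# Report what the current session actually provides.
import os
import psutil

print(f"CPUs: {os.cpu_count()}")
print(f"RAM:  {psutil.virtual_memory().total / 1024**3:.1f} GiB")
```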

Now our special Stable Diffusion Automatic1111 SD Web UI notebook runs blazing fast.

SDXL 1024x1024 test

https://twitter.com/GozukaraFurkan/status/1714957139851063743

https://www.patreon.com/posts/run-on-free-like-88714330