NUROISEA / anime-webui-colab

webui on colab for weebs lol

VAE is not working, what happened? #31

Closed · Ayaya70 closed 1 year ago

Ayaya70 commented 1 year ago

I've spent almost 4 hours trying to solve this VAE problem, but it's useless. Whenever I load a VAE with the PYOM* WebUI Colab, it either says the VAE couldn't be loaded, or it complains about two devices being connected or something (traceback below). And no, I'm not running it twice. Everything works normally, but it always breaks the moment I add a VAE.

```
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 55, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 35, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 620, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 729, in process_images_inner
    p.setup_conds()
  File "/content/stable-diffusion-webui/modules/processing.py", line 1126, in setup_conds
    super().setup_conds()
  File "/content/stable-diffusion-webui/modules/processing.py", line 346, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
  File "/content/stable-diffusion-webui/modules/processing.py", line 338, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 143, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 665, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 135, in encode
    return self(text)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 125, in forward
    outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == "hidden")
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 708, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 223, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
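
For context, that final RuntimeError means the text encoder's embedding weights and the prompt's token ids ended up on different devices (GPU vs. CPU). A minimal sketch, not the web UI's actual code, that reproduces the same error:

```python
import torch
import torch.nn as nn

# An embedding table on the GPU, like the CLIP token_embedding in the traceback above.
embedding = nn.Embedding(num_embeddings=49408, embedding_dim=768).cuda()

# Token ids accidentally left on the CPU.
token_ids = torch.tensor([[1, 2, 3]])

try:
    embedding(token_ids)  # same F.embedding / torch.embedding path as in the traceback
except RuntimeError as err:
    print(err)  # "Expected all tensors to be on the same device ... cpu and cuda:0"

# Once both tensors are on the same device, the lookup succeeds.
print(embedding(token_ids.to("cuda:0")).shape)  # torch.Size([1, 3, 768])
```
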
NUROISEA commented 1 year ago

Can you provide the links to the VAE and the model you used?

Ayaya70 commented 1 year ago

I used the Hassaku model. The link is https://huggingface.co/nolanaatama/hssk/resolve/main/hssk.safetensors

I used the VAE that's already in the Google Colab, https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt, but normally I just download the VAE I want with the Batchlinks downloader, and I never had this problem before.

```
Stable diffusion model failed to load
Loading weights [None] from /content/models/hssk.safetensors
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/content/stable-diffusion-webui/modules/ui.py", line 1515, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/content/stable-diffusion-webui/modules/shared.py", line 754, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 439, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 510, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 350, in load_model_weights
    sd_vae.load_vae(model, vae_file, vae_source)
  File "/content/stable-diffusion-webui/modules/sd_vae.py", line 138, in load_vae
    assert os.path.isfile(vae_file), f"VAE {vae_source} doesn't exist: {vae_file}"
AssertionError: VAE from commandline argument doesn't exist: /content/VAE/https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt
```


NUROISEA commented 1 year ago

> AssertionError: VAE from commandline argument doesn't exist: /content/VAE/https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt

You don't need to put the entire link in the VAE field, only the file name; that's what causes the error.
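
Roughly what happens: the notebook prepends /content/VAE/ to whatever you type in that field (that's where the /content/VAE/https://... path in your log comes from), so a full URL becomes a path that can't exist and trips the assertion in modules/sd_vae.py. A simplified sketch of that check (check_vae is just an illustrative helper, and the notebook's exact path handling may differ):

```python
import os

VAE_DIR = "/content/VAE"

def check_vae(vae_input: str) -> str:
    # The web UI only verifies that the resulting path is an existing file.
    vae_file = os.path.join(VAE_DIR, vae_input)
    assert os.path.isfile(vae_file), f"VAE doesn't exist: {vae_file}"
    return vae_file

# Pasting the full URL builds "/content/VAE/https://huggingface.co/..." -> AssertionError
# check_vae("https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt")

# Using just the file name works, as long as the first cell already downloaded it there.
# check_vae("kl-f8-anime2.ckpt")
```
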

Image of what to put in the VAE loader ![image](https://github.com/NUROISEA/anime-webui-colab/assets/120075289/5edee0eb-cd4f-4d6d-9cbc-c185c8018a5f)

You only need to place the link in the first cell :)

As shown in this image ![image](https://github.com/NUROISEA/anime-webui-colab/assets/120075289/5b379e62-9437-424f-8dcc-fb8aab4202d0)
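
If you're unsure what name to type in the VAE field, it's just the last path segment of the download URL. A quick, hypothetical snippet for pulling it out:

```python
import os
from urllib.parse import urlparse

vae_url = "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt"

# The first cell downloads from the full URL; the VAE field only needs this part.
vae_filename = os.path.basename(urlparse(vae_url).path)
print(vae_filename)  # kl-f8-anime2.ckpt
```
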
NUROISEA commented 1 year ago

I tested both of the links you provided, and they seem to work just fine.


These are my cell inputs for reference:

First cell: ⏬ Download models ![image](https://github.com/NUROISEA/anime-webui-colab/assets/120075289/8c5cb1b3-109d-4940-bdae-fe97e4b5b1ec)
Second cell: 🚀 Launch web UI ![image](https://github.com/NUROISEA/anime-webui-colab/assets/120075289/eff4811a-09e5-42ee-b38b-3e8f49d40882) ![image](https://github.com/NUROISEA/anime-webui-colab/assets/120075289/6dd0ac97-0834-4e99-a3ee-47714efc6934)

Closing this issue for now; feel free to reopen it if something is really broken. If so, please provide all of your cell inputs so we can debug it more easily, thanks! :)

Ayaya70 commented 1 year ago

Thanks so much, I can't believe it was something so simple this entire time... The bizarre thing is that I always did it that way and never had problems before, oh my. One last question: did you test the Batchlinks downloader? I don't want to risk it since I've already spent hours on this, I just want to confirm that VAEs downloaded through it still work.

NUROISEA commented 1 year ago

Yep, Batchlinks works as expected!