Same exact error with the Dreamshaper 8 inpainting model.
Here from Google, same issue. Dreamshaper 4 inpainting will not load; the standard SD1.5 inpainting model works. Windows 10, 128GB RAM, RTX 3090.
Also on macOS; I've been having this same issue since 3.0.1.
All inpainting models except standard are failing for me.
Sorry to leave a "same here" comment. New to Invoke as of today, and I can confirm this issue persists even with a fresh, clean installation: only the OG Stable Diffusion inpainting model works; none of the downloaded ones do.
I tried converting to diffusers, re-downloading the inpainting config YAML and linking it, and re-downloading the checkpoints tagged both as inpainting and as standard checkpoints.
Nothing.
Hope they fix it; the whole reason I moved from A1111 was the Unified Canvas.
Many inpainting checkpoint models are not loading properly in 3.0.1. The fix is in the main branch of the code and will be rolled out in 3.0.2 (very soon now).
@lstein
I upgraded to the main branch using option 9 in the CLI and I'm still getting the error with the Dreamshaper 8 inpainting model.
Thank you for your efforts on this though!
I'm seeing the same as @nbs -- upgraded to main and it's still broken.
Same here. The only model that matters, aka the inpainting model on the Unified Canvas in InvokeAI, is broken.
Updated to Main today (3.0.2a1) and inpainting is resolved for me. I'm inclined to close the ticket unless others have the same issue when pulling from main.
@dannyvfilms Which model worked for you? I'm still getting it.
cyberrealistic_v32-inpainting. I didn't extensively try all of them, so I don't have logs to share at the moment.
CyberRealistic inpainting does seem to work for me, but both Deliberate models do not. @dannyvfilms
Just updated to main and tried again, still getting an error. Here's the entire output in the terminal.
```
[2023-08-07 07:39:16,329]::[InvokeAI]::INFO --> Loading model D:\StableDiffusion\InvokeAI\models\.cache\3b6561f4b4c18bec0fccf5d067e11ae5, type sd-1:main:text_encoder
[2023-08-07 07:39:29,686]::[InvokeAI]::INFO --> Loading model D:\StableDiffusion\InvokeAI\models\sd-1\embedding\FastNegativeV2.pt, type sd-1:embedding
[2023-08-07 07:39:32,380]::[InvokeAI]::INFO --> Loading model D:\StableDiffusion\InvokeAI\models\.cache\3b6561f4b4c18bec0fccf5d067e11ae5, type sd-1:main:unet
====ERR LOAD====
None: Cannot load D:\StableDiffusion\InvokeAI\models\.cache\3b6561f4b4c18bec0fccf5d067e11ae5 because conv_in.weight expected shape tensor(..., device='meta', size=(320, 4, 3, 3)), but got torch.Size([320, 9, 3, 3]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
[2023-08-07 07:39:34,449]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\app\services\processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\latent.py", line 428, in invoke
    unet_info = context.services.model_manager.get_model(
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\app\services\model_manager_service.py", line 365, in get_model
    model_info = self.mgr.get_model(
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_management\model_manager.py", line 491, in get_model
    model_context = self.cache.get_model(
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_management\model_cache.py", line 198, in get_model
    model = model_info.get_model(child_type=submodel, torch_dtype=self.precision)
  File "D:\StableDiffusion\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_management\models\base.py", line 300, in get_model
    raise Exception(f"Failed to load {self.base_model}:{self.model_type}:{child_type} model")
Exception: Failed to load sd-1:main:unet model
[2023-08-07 07:39:34,633]::[InvokeAI]::ERROR --> Error while invoking:
Failed to load sd-1:main:unet model
```
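For context on the shape mismatch above (an editorial note, not from the thread): a standard SD-1.5 UNet's first convolution takes 4 latent channels, while an inpainting UNet takes 9 (4 noisy-image latents, 4 masked-image latents, and 1 mask channel). Loading inpainting weights against a config that declares `in_channels=4` produces exactly the `(320, 4, 3, 3)` vs `torch.Size([320, 9, 3, 3])` mismatch in the error. A minimal PyTorch sketch of the two layer shapes:

```python
# Illustrative only: the conv_in layers of a standard vs. an inpainting
# SD-1.5 UNet. This is not InvokeAI's loading code; it just reproduces
# the two weight shapes named in the error message above.
import torch.nn as nn

standard_conv_in = nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3, padding=1)
inpaint_conv_in = nn.Conv2d(in_channels=9, out_channels=320, kernel_size=3, padding=1)

print(standard_conv_in.weight.shape)  # torch.Size([320, 4, 3, 3])
print(inpaint_conv_in.weight.shape)   # torch.Size([320, 9, 3, 3])
```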
Same problem, running InvokeAI 3.0.1 (hotfix 3) on Pop!_OS (Ubuntu-based). I don't have any issues running Automatic1111, although I really prefer InvokeAI.
Updated to 3.0.2rc1 and the issue appears to persist with epicrealism_pureEvolutionV5-inpainting.safetensors
```
[2023-08-09 16:44:23,930]::[uvicorn.access]::INFO --> 127.0.0.1:59009 - "POST /api/v1/images/upload?image_category=mask&is_intermediate=true HTTP/1.1" 201
[2023-08-09 16:44:23,940]::[uvicorn.access]::INFO --> 127.0.0.1:59009 - "POST /api/v1/sessions/ HTTP/1.1" 200
[2023-08-09 16:44:23,966]::[uvicorn.access]::INFO --> 127.0.0.1:59009 - "PUT /api/v1/sessions/04c8e416-03c7-40cc-a8f5-8483bb40522b/invoke?all=true HTTP/1.1" 202
[2023-08-09 16:44:23,971]::[uvicorn.access]::INFO --> 127.0.0.1:59010 - "PATCH /api/v1/images/i/c765c45e-097c-4b3e-8209-27890a1de6f9.png HTTP/1.1" 200
[2023-08-09 16:44:23,974]::[uvicorn.access]::INFO --> 127.0.0.1:59011 - "PATCH /api/v1/images/i/d9e31a2c-7e6a-41ff-a159-f6cd60cc8e64.png HTTP/1.1" 200
[2023-08-09 16:44:23,998]::[uvicorn.access]::INFO --> 127.0.0.1:59009 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2023-08-09 16:44:24,055]::[uvicorn.access]::INFO --> 127.0.0.1:59010 - "GET /api/v1/images/?board_id=none&categories=control&categories=mask&categories=user&categories=other&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2023-08-09 16:44:24,785]::[InvokeAI]::INFO --> Loading model /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb, type sd-1:main:unet
====ERR LOAD====
None: Cannot load /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb because conv_in.weight expected shape tensor(..., device='meta', size=(320, 4, 3, 3)), but got torch.Size([320, 9, 3, 3]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
[2023-08-09 16:44:25,377]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 220, in invoke
    with self.load_model_old_way(context, scheduler) as model:
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 117, in __enter__
    return next(self.gen)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 174, in load_model_old_way
    unet_info = context.services.model_manager.get_model(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/services/model_manager_service.py", line 365, in get_model
    model_info = self.mgr.get_model(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/model_management/model_manager.py", line 491, in get_model
    model_context = self.cache.get_model(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/model_management/model_cache.py", line 198, in get_model
    model = model_info.get_model(child_type=submodel, torch_dtype=self.precision)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/model_management/models/base.py", line 300, in get_model
    raise Exception(f"Failed to load {self.base_model}:{self.model_type}:{child_type} model")
Exception: Failed to load sd-1:main:unet model
[2023-08-09 16:44:25,378]::[InvokeAI]::ERROR --> Error while invoking:
Failed to load sd-1:main:unet model
```
@dannyvfilms @nbs @nelsonre @BakonGuy @egoegoegoegoego could y'all try re-adding the model that was giving you difficulty and seeing if that fixes the issue?
@Quildar could you try updating to 3.0.2rc1 and see if the issue persists?
@Millu re-adding the model generated a different issue. Changing the VAE from sd-vae-ft-mse to Default, in combination with re-adding the model, resolved the issue.
```
[2023-08-09 19:12:31,087]::[InvokeAI]::INFO --> Converting /Applications/InvokeAI/autoimport/main/epicrealism_pureEvolutionV5-inpainting.safetensors to diffusers format
[2023-08-09 19:12:42,220]::[InvokeAI]::INFO --> Loading model /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb, type sd-1:main:tokenizer
[2023-08-09 19:12:42,754]::[InvokeAI]::INFO --> Loading model /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb, type sd-1:main:text_encoder
[2023-08-09 19:12:54,358]::[InvokeAI]::INFO --> Loading model /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb, type sd-1:main:scheduler
[2023-08-09 19:12:54,571]::[InvokeAI]::INFO --> Loading model /Applications/InvokeAI/models/.cache/6161a84c53c26ce2e24a1544860280bb, type sd-1:main:unet
/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `requires_safety_checker` directly via 'StableDiffusionGeneratorPipeline' object attribute is deprecated. Please access 'requires_safety_checker' over 'StableDiffusionGeneratorPipeline's config object instead, e.g. 'scheduler.config.requires_safety_checker'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
Generating: 0%| | 0/1 [00:00<?, ?it/s]You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Generating: 0%| | 0/1 [00:00<?, ?it/s]
[2023-08-09 19:13:15,918]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 236, in invoke
    generator_output = next(outputs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 144, in generate
    results = generator.generate(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 328, in generate
    image = make_image(x_T, seed)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/inpaint.py", line 292, in make_image
    pipeline_output = pipeline.inpaint_from_embeddings(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 826, in inpaint_from_embeddings
    init_image_latents = self.non_noised_latents_from_image(init_image, device=device, dtype=latents_dtype)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 886, in non_noised_latents_from_image
    init_latent_dist = self.vae.encode(init_image).latent_dist
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 242, in encode
    h = self.encoder(x)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/models/vae.py", line 110, in forward
    sample = self.conv_in(sample)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (c10::Half) and bias type (float) should be the same
[2023-08-09 19:13:15,922]::[InvokeAI]::ERROR --> Error while invoking:
Input type (c10::Half) and bias type (float) should be the same
```
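For context on the Half/float error above (an editorial note, not from the thread): it typically means the image tensor reaching the VAE was float16 while the VAE's weights were still float32, which is consistent with the report that switching the VAE selection back to Default resolved it. A minimal diffusers sketch of the mismatch and the usual fix, assuming the stabilityai/sd-vae-ft-mse checkpoint named earlier:

```python
# Sketch of the dtype mismatch at the VAE's first convolution, and the
# usual fix: keep input and weights in the same dtype. Assumes diffusers
# is installed; the model id mirrors the sd-vae-ft-mse VAE named above.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # weights are float32
image = torch.randn(1, 3, 512, 512, dtype=torch.float16)          # half-precision input

# vae.encode(image)  # would raise: Input type (c10::Half) and bias type (float) ...

latents = vae.encode(image.to(vae.dtype)).latent_dist.sample()    # cast input to match
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```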
> @dannyvfilms @nbs @nelsonre @BakonGuy @egoegoegoegoego could y'all try re-adding the model that was giving you difficulty and seeing if that fixes the issue?
>
> @Quildar could you try updating to 3.0.2rc1 and see if the issue persists?
Ah that does fix it for me (Dreamshaper 8 Inpainting model). Thank you.
> @dannyvfilms @nbs @nelsonre @BakonGuy @egoegoegoegoego could y'all try re-adding the model that was giving you difficulty and seeing if that fixes the issue?
>
> @Quildar could you try updating to 3.0.2rc1 and see if the issue persists?
Updated and found the inpainting models still didn't work. After removing them from Invoke and re-downloading them, they work now.
Cheers
I tried a handful of inpainting models that were giving me issues before. With 3.0.2rc1, they load fine now.
EDIT: restarting InvokeAI resolved the issue, but something is still going on here that needs to be resolved.
The same model that was fixed for me last night was working, then stopped working without explanation. Changing the VAE and removing the LoRA did not resolve the issue.
```
[2023-08-10 13:59:08,993]::[uvicorn.access]::INFO --> 127.0.0.1:50767 - "POST /api/v1/images/upload?image_category=mask&is_intermediate=true HTTP/1.1" 201
[2023-08-10 13:59:09,017]::[uvicorn.access]::INFO --> 127.0.0.1:50767 - "POST /api/v1/sessions/ HTTP/1.1" 200
[2023-08-10 13:59:09,056]::[uvicorn.access]::INFO --> 127.0.0.1:50767 - "PUT /api/v1/sessions/97b9e9d5-6954-4db4-bb10-e4ae5219d31d/invoke?all=true HTTP/1.1" 202
[2023-08-10 13:59:09,067]::[uvicorn.access]::INFO --> 127.0.0.1:50768 - "PATCH /api/v1/images/i/c5126e81-813f-4e80-825e-07106a030f42.png HTTP/1.1" 200
[2023-08-10 13:59:09,068]::[uvicorn.access]::INFO --> 127.0.0.1:50769 - "PATCH /api/v1/images/i/c8739c12-12fd-43f5-bfd2-84bc8fdeccc8.png HTTP/1.1" 200
/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/configuration_utils.py:134: FutureWarning: Accessing config attribute `requires_safety_checker` directly via 'StableDiffusionGeneratorPipeline' object attribute is deprecated. Please access 'requires_safety_checker' over 'StableDiffusionGeneratorPipeline's config object instead, e.g. 'scheduler.config.requires_safety_checker'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
Generating: 0%| | 0/1 [00:00<?, ?it/s]You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Generating: 0%| | 0/1 [00:00<?, ?it/s]
[2023-08-10 13:59:30,088]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/services/processor.py", line 90, in __process
    outputs = invocation.invoke(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/app/invocations/generate.py", line 236, in invoke
    generator_output = next(outputs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 144, in generate
    results = generator.generate(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/base.py", line 328, in generate
    image = make_image(x_T, seed)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/generator/inpaint.py", line 292, in make_image
    pipeline_output = pipeline.inpaint_from_embeddings(
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 826, in inpaint_from_embeddings
    init_image_latents = self.non_noised_latents_from_image(init_image, device=device, dtype=latents_dtype)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/invokeai/backend/stable_diffusion/diffusers_pipeline.py", line 886, in non_noised_latents_from_image
    init_latent_dist = self.vae.encode(init_image).latent_dist
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/models/autoencoder_kl.py", line 242, in encode
    h = self.encoder(x)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/diffusers/models/vae.py", line 110, in forward
    sample = self.conv_in(sample)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/Applications/InvokeAI/.venv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (c10::Half) and bias type (float) should be the same
[2023-08-10 13:59:30,091]::[InvokeAI]::ERROR --> Error while invoking:
Input type (c10::Half) and bias type (float) should be the same
```
(The same RuntimeError traceback repeats for two further attempts, at 13:59:36 and 14:01:43.)
With the new update the SDXL base model stopped working for me, even after re-adding the model. So I don't know; I just don't want to deal with this anymore. EDIT: tried some custom SDXL models, and they stopped working as well.
@dannyvfilms @etha302 try using the default VAE for inpainting if you see that error. There's a fix coming in 3.0.2 for it.
Closing this as it has been addressed in 3.1
For anyone seeing this, if you're still experiencing the issue, try clearing your /models/.cache folder and restarting Invoke.
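As a concrete example of that suggestion (paths are an assumption taken from the macOS logs in this thread; adjust for your own install location):

```sh
# Remove the converted-model cache, then restart Invoke. On the Windows
# install above this would be D:\StableDiffusion\InvokeAI\models\.cache.
rm -rf /Applications/InvokeAI/models/.cache
```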
I also encountered this problem. After a long investigation, I found that it was caused by transformers having been downgraded from 4.31.0 to 4.25.0 and timm from 0.6.13 to 0.4.12. After I restored transformers and timm to the versions required by the InvokeAI project (4.31.0 and 0.6.13), the problem was solved.
Using 3.4.0post2, I currently have this problem with the RevAnimated 1.2.2 model.
Looks like I have: transformers-4.35.2, timm-0.6.13.
How do I get the older versions of these to see if @zhangwenhao666's solution works for me?
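For reference, and assuming a standard pip-based install: the versions @zhangwenhao666 reported restoring can be pinned from inside the InvokeAI virtual environment, though note that a 3.4.x install may require different versions than 3.0.x did:

```sh
# Version numbers come from the comment above, not from a requirements file.
pip install transformers==4.31.0 timm==0.6.13
```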
Is there an existing issue for this?
OS
macOS
GPU
cpu
VRAM
16GB
What version did you experience this issue on?
3.0.1
What happened?
I'm encountering a persistent issue when trying to use the Unified Canvas in combination with various inpainting models. Specifically, every time I attempt to load an inpainting model onto the mask layer of the Unified Canvas, I receive the following error message: "Failed to load sd-1:main:unet model".
I've tested this problem across multiple inpainting models, including dreamshaper_8Inpainting.safetensors, epicrealism_pureEvolutionV3-inpainting.safetensors, and epicrealism_v10.0-inpainting.safetensors. For these models, the error persists regardless of whether they are installed through the UI or the terminal.
I've also attempted to clear some RAM to ensure that this wasn't a memory-related issue, but it had no effect on the problem at hand. I'm on a 2021 14" MacBook Pro with an M1 Pro processor and 16GB of unified memory.
Interestingly, the stable-diffusion-inpainting model seems to be the only inpainting model that works correctly with the Unified Canvas. I've attached the relevant logs to provide further details on this.
Screenshots
No response
Additional context
Logs of failed attempt with other inpainting models.
Contact Details
No response