invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Error while invoking: Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect. #5297

Closed rugabunda closed 9 months ago

rugabunda commented 9 months ago

Is there an existing issue for this?

OS

Windows

GPU

cuda

VRAM

12

What version did you experience this issue on?

3.5.0 RC1

What happened?

Fresh install... the app loaded without error, but I received the following error upon invoking an image:

[2023-12-14 21:11:43,323]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
    outputs = invocation.invoke_internal(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 315, in invoke
    c1, c1_pooled, ec1 = self.run_clip_compel(
                         ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 172, in run_clip_compel
    tokenizer_info = context.services.model_manager.get_model(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
    model_info = self.mgr.get_model(
                 ^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 490, in get_model
    model_path = model_class.convert_if_required(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 122, in convert_if_required
    return _convert_ckpt_and_cache(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 289, in _convert_ckpt_and_cache
    convert_ckpt_to_diffusers(
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1728, in convert_ckpt_to_diffusers
    pipe = download_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1406, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\accelerate\utils\modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.

[2023-12-14 21:11:43,327]::[InvokeAI]::ERROR --> Error while invoking:
Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.
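For context on what this shape mismatch means: a `[640, 640]` tensor is a Linear-layer weight, while `[640, 640, 1, 1]` is a 1x1 Conv2d weight. This pattern typically appears when a checkpoint is converted against a config for the wrong architecture (as confirmed later in this thread, the wrong YAML had been selected at import), so the rebuilt UNet expects conv projections where the checkpoint stores linear ones. A minimal numpy sketch of the mismatch (shapes taken from the traceback; the variable names are illustrative, not InvokeAI code):

```python
import numpy as np

# What the SDXL checkpoint stores for an attention projection: a Linear weight.
linear_weight = np.zeros((640, 640))

# What a UNet rebuilt from the wrong YAML config expects: a 1x1 Conv2d weight.
conv_expected_shape = (640, 640, 1, 1)

# accelerate's set_module_tensor_to_device refuses the assignment because the
# shapes differ -- this is the ValueError shown in the traceback.
assert linear_weight.shape != conv_expected_shape

# The two layouts carry the same data and differ only by two trailing
# singleton dims, so with the correct config the shapes line up:
as_conv = linear_weight.reshape(640, 640, 1, 1)
assert as_conv.shape == conv_expected_shape
```

With the correct config, the UNet is built with linear projections and the `[640, 640]` weight loads as-is.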

Screenshots

No response

Additional context

Full log:

Starting the InvokeAI browser-based UI..
[2023-12-14 21:11:20,252]::[InvokeAI]::INFO --> Patchmatch initialized
D:\AI\Image\InvokeAI\.venv\Lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
[2023-12-14 21:11:24,255]::[uvicorn.error]::INFO --> Started server process [19960]
[2023-12-14 21:11:24,255]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-12-14 21:11:24,255]::[InvokeAI]::INFO --> InvokeAI version 3.5.0rc1
[2023-12-14 21:11:24,255]::[InvokeAI]::INFO --> Root directory = D:\AI\Image\InvokeAI
[2023-12-14 21:11:24,256]::[InvokeAI]::INFO --> Initializing database at D:\AI\Image\InvokeAI\databases\invokeai.db
[2023-12-14 21:11:24,258]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 3080 Ti
[2023-12-14 21:11:24,260]::[InvokeAI]::INFO --> Scanning D:\AI\Image\InvokeAI\models for new models
[2023-12-14 21:11:24,394]::[InvokeAI]::INFO --> Scanned 9 files and directories, imported 0 models
[2023-12-14 21:11:24,396]::[InvokeAI]::INFO --> Model manager service initialized
[2023-12-14 21:11:24,403]::[ModelInstallService]::INFO --> Checking for models that have been moved or deleted from disk
[2023-12-14 21:11:24,404]::[ModelInstallService]::INFO --> Scanning D:\AI\Image\InvokeAI\models for new and orphaned models
[2023-12-14 21:11:24,414]::[ModelInstallService]::INFO --> 0 new models registered; 0 unregistered
[2023-12-14 21:11:24,414]::[ModelInstallService]::INFO --> Scanning autoimport directory for new models
[2023-12-14 21:11:24,416]::[ModelInstallService]::INFO --> 0 new models registered
[2023-12-14 21:11:24,416]::[ModelInstallService]::INFO --> Model installer (re)initialized
[2023-12-14 21:11:24,419]::[InvokeAI]::INFO --> Pruned 6 finished queue items
[2023-12-14 21:11:24,432]::[InvokeAI]::INFO --> Cleaned database (freed 0.05MB)
[2023-12-14 21:11:24,432]::[uvicorn.error]::INFO --> Application startup complete.
[2023-12-14 21:11:24,432]::[uvicorn.error]::INFO --> Uvicorn running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2023-12-14 21:11:28,491]::[uvicorn.access]::INFO --> 127.0.0.1:65083 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh1Dc HTTP/1.1" 200
[2023-12-14 21:11:28,497]::[uvicorn.error]::INFO --> ('127.0.0.1', 65084) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=1jtaVnPlCE3_E7HkAAAA" [accepted]
[2023-12-14 21:11:28,497]::[uvicorn.error]::INFO --> connection open
[2023-12-14 21:11:28,499]::[uvicorn.access]::INFO --> 127.0.0.1:65083 - "POST /socket.io/?EIO=4&transport=polling&t=Onhh1Dk&sid=1jtaVnPlCE3_E7HkAAAA HTTP/1.1" 200
[2023-12-14 21:11:28,501]::[uvicorn.access]::INFO --> 127.0.0.1:65085 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh1Dn&sid=1jtaVnPlCE3_E7HkAAAA HTTP/1.1" 200
[2023-12-14 21:11:28,628]::[uvicorn.access]::INFO --> 127.0.0.1:65083 - "GET /api/v1/app/version HTTP/1.1" 200
[2023-12-14 21:11:28,631]::[uvicorn.access]::INFO --> 127.0.0.1:65085 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-12-14 21:11:28,957]::[InvokeAI]::INFO --> NSFW checker initialized
[2023-12-14 21:11:28,957]::[uvicorn.access]::INFO --> 127.0.0.1:65086 - "GET /api/v1/app/config HTTP/1.1" 200
[2023-12-14 21:11:28,958]::[uvicorn.access]::INFO --> 127.0.0.1:65083 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2023-12-14 21:11:28,958]::[uvicorn.access]::INFO --> 127.0.0.1:65087 - "GET /api/v1/app/invocation_cache/status HTTP/1.1" 200
[2023-12-14 21:11:28,959]::[uvicorn.access]::INFO --> 127.0.0.1:65088 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2023-12-14 21:11:28,962]::[uvicorn.access]::INFO --> 127.0.0.1:65089 - "GET /api/v1/models/?model_type=controlnet HTTP/1.1" 200
[2023-12-14 21:11:28,962]::[uvicorn.access]::INFO --> 127.0.0.1:65090 - "GET /api/v1/models/?model_type=lora HTTP/1.1" 200
[2023-12-14 21:11:28,963]::[uvicorn.access]::INFO --> 127.0.0.1:65092 - "GET /api/v1/models/?model_type=t2i_adapter HTTP/1.1" 200
[2023-12-14 21:11:28,964]::[uvicorn.access]::INFO --> 127.0.0.1:65093 - "GET /api/v1/models/?model_type=ip_adapter HTTP/1.1" 200
[2023-12-14 21:11:29,636]::[uvicorn.access]::INFO --> 127.0.0.1:65089 - "POST /api/v1/utilities/dynamicprompts HTTP/1.1" 200
[2023-12-14 21:11:39,981]::[uvicorn.error]::INFO --> connection closed
[2023-12-14 21:11:40,002]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET / HTTP/1.1" 200
[2023-12-14 21:11:40,026]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /index-9adb4e14.js HTTP/1.1" 304
[2023-12-14 21:11:40,113]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /ThemeLocaleProvider-5a0655f2.js HTTP/1.1" 304
[2023-12-14 21:11:40,115]::[uvicorn.access]::INFO --> 127.0.0.1:65211 - "GET /MantineProvider-93f645d7.js HTTP/1.1" 304
[2023-12-14 21:11:40,116]::[uvicorn.access]::INFO --> 127.0.0.1:65212 - "GET /ThemeLocaleProvider-0667edb8.css HTTP/1.1" 304
[2023-12-14 21:11:40,117]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /logo-13003d72.png HTTP/1.1" 304
[2023-12-14 21:11:40,166]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /en.json HTTP/1.1" 304
[2023-12-14 21:11:40,185]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /App-58ce9b98.js HTTP/1.1" 304
[2023-12-14 21:11:40,186]::[uvicorn.access]::INFO --> 127.0.0.1:65211 - "GET /App-6125620a.css HTTP/1.1" 304
[2023-12-14 21:11:40,300]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /inter-latin-wght-normal-88df0b5a.woff2 HTTP/1.1" 304
[2023-12-14 21:11:40,303]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh46E HTTP/1.1" 200
[2023-12-14 21:11:40,321]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /api/v1/app/version HTTP/1.1" 200
[2023-12-14 21:11:40,322]::[uvicorn.access]::INFO --> 127.0.0.1:65212 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-12-14 21:11:40,323]::[uvicorn.error]::INFO --> ('127.0.0.1', 65213) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=pUENaTpVibj3ZHoVAAAC" [accepted]
[2023-12-14 21:11:40,323]::[uvicorn.error]::INFO --> connection open
[2023-12-14 21:11:40,324]::[uvicorn.access]::INFO --> 127.0.0.1:65211 - "GET /api/v1/app/config HTTP/1.1" 200
[2023-12-14 21:11:40,325]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "POST /socket.io/?EIO=4&transport=polling&t=Onhh46T&sid=pUENaTpVibj3ZHoVAAAC HTTP/1.1" 200
[2023-12-14 21:11:40,327]::[uvicorn.access]::INFO --> 127.0.0.1:65214 - "GET /api/v1/app/invocation_cache/status HTTP/1.1" 200
[2023-12-14 21:11:40,328]::[uvicorn.access]::INFO --> 127.0.0.1:65215 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2023-12-14 21:11:40,329]::[uvicorn.access]::INFO --> 127.0.0.1:65212 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh46T.0&sid=pUENaTpVibj3ZHoVAAAC HTTP/1.1" 200
[2023-12-14 21:11:40,330]::[uvicorn.access]::INFO --> 127.0.0.1:65211 - "GET /api/v1/models/?model_type=ip_adapter HTTP/1.1" 200
[2023-12-14 21:11:40,330]::[uvicorn.access]::INFO --> 127.0.0.1:65216 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2023-12-14 21:11:40,331]::[uvicorn.access]::INFO --> 127.0.0.1:65217 - "GET /api/v1/models/?model_type=lora HTTP/1.1" 200
[2023-12-14 21:11:40,333]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /api/v1/models/?model_type=controlnet HTTP/1.1" 200
[2023-12-14 21:11:40,333]::[uvicorn.access]::INFO --> 127.0.0.1:65219 - "GET /api/v1/models/?model_type=t2i_adapter HTTP/1.1" 200
[2023-12-14 21:11:40,343]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh46s&sid=pUENaTpVibj3ZHoVAAAC HTTP/1.1" 200
[2023-12-14 21:11:40,357]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh474&sid=pUENaTpVibj3ZHoVAAAC HTTP/1.1" 200
[2023-12-14 21:11:40,363]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /socket.io/?EIO=4&transport=polling&t=Onhh47A&sid=pUENaTpVibj3ZHoVAAAC HTTP/1.1" 200
[2023-12-14 21:11:40,572]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /openapi.json HTTP/1.1" 200
[2023-12-14 21:11:41,233]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /api/v1/models/?model_type=embedding HTTP/1.1" 200
[2023-12-14 21:11:41,235]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=main HTTP/1.1" 200
[2023-12-14 21:11:41,235]::[uvicorn.access]::INFO --> 127.0.0.1:65211 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&model_type=onnx HTTP/1.1" 200
[2023-12-14 21:11:41,235]::[uvicorn.access]::INFO --> 127.0.0.1:65212 - "GET /api/v1/models/?model_type=vae HTTP/1.1" 200
[2023-12-14 21:11:41,236]::[uvicorn.access]::INFO --> 127.0.0.1:65214 - "GET /api/v1/models/?base_models=sdxl-refiner&model_type=main HTTP/1.1" 200
[2023-12-14 21:11:41,237]::[uvicorn.access]::INFO --> 127.0.0.1:65215 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2023-12-14 21:11:41,237]::[uvicorn.access]::INFO --> 127.0.0.1:65219 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=100&offset=0 HTTP/1.1" 200
[2023-12-14 21:11:41,238]::[uvicorn.access]::INFO --> 127.0.0.1:65217 - "GET /api/v1/images/?board_id=none&categories=control&categories=mask&categories=user&categories=other&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[2023-12-14 21:11:41,376]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "POST /api/v1/utilities/dynamicprompts HTTP/1.1" 200
[2023-12-14 21:11:43,174]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2023-12-14 21:11:43,205]::[uvicorn.access]::INFO --> 127.0.0.1:65218 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-12-14 21:11:43,212]::[uvicorn.access]::INFO --> 127.0.0.1:65210 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2023-12-14 21:11:43,221]::[InvokeAI]::INFO --> Converting d:\ai\image\sd\models\Stable-diffusion\sd_xl_base_1.0_0.9vae.safetensors to diffusers format
[2023-12-14 21:11:43,323]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
    outputs = invocation.invoke_internal(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 315, in invoke
    c1, c1_pooled, ec1 = self.run_clip_compel(
                         ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 172, in run_clip_compel
    tokenizer_info = context.services.model_manager.get_model(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
    model_info = self.mgr.get_model(
                 ^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 490, in get_model
    model_path = model_class.convert_if_required(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 122, in convert_if_required
    return _convert_ckpt_and_cache(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 289, in _convert_ckpt_and_cache
    convert_ckpt_to_diffusers(
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1728, in convert_ckpt_to_diffusers
    pipe = download_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1406, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "D:\AI\Image\InvokeAI\.venv\Lib\site-packages\accelerate\utils\modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.

[2023-12-14 21:11:43,327]::[InvokeAI]::ERROR --> Error while invoking:
Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.

Contact Details

No response

Millu commented 9 months ago

What were your generation settings when you experienced this error? If you could share the model you were using that would be very helpful!

rugabunda commented 9 months ago

> What were your generation settings when you experienced this error? If you could share the model you were using that would be very helpful!

Default... if I remember correctly; it was a completely clean install. I believe I was using an imported SDXL and SDXL refiner... I tried both, and just SDXL. I was just updating to RC2 before I read this comment, so I will see what happens after it's complete.

rugabunda commented 9 months ago

Same problem....

default fresh install rc2

Default generation settings, using the imported sd_xl_base_1.0_0.9vae exclusively: 1 iteration, 50 steps, CFG 7.5, Euler, default VAE, fp16/32.

Changing the generation settings does not seem to affect the error.

rugabunda commented 9 months ago

OK, solution: using stable-diffusion-xl-base-1-0 works; the imported sd_xl_base_1.0_0.9vae does not.

rugabunda commented 9 months ago

I notice Invoke downloads 12 GB worth of SDXL model, including the tokenizer and text encoder, whereas A1111 only needs the base file. Why is that? And is this why the VAE version I downloaded does not work?

[screenshot: 2023-12-18_19-52-00_UVJ79fUSUu]
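On the question above: a diffusers-format model is a directory of separately loadable components rather than one file, which is why the download is larger and includes a tokenizer and text encoders. A sketch of the typical SDXL diffusers layout (directory names follow the diffusers convention; the helper function is hypothetical, not InvokeAI code):

```python
# Typical top-level entries of an SDXL model in diffusers format.
# A single .safetensors checkpoint packs all of this into one state dict;
# the diffusers layout splits it so components can be loaded and cached
# independently.
SDXL_DIFFUSERS_ENTRIES = {
    "model_index.json",               # manifest tying the components together
    "unet",                           # the denoising network
    "vae",                            # latent encoder/decoder
    "text_encoder", "tokenizer",      # first CLIP text encoder
    "text_encoder_2", "tokenizer_2",  # SDXL's second, larger text encoder
    "scheduler",                      # sampler configuration
}

def looks_like_diffusers_dir(entries):
    """Heuristic: a diffusers model directory always has a model_index.json."""
    return "model_index.json" in set(entries)

assert looks_like_diffusers_dir(SDXL_DIFFUSERS_ENTRIES)
assert not looks_like_diffusers_dir({"sd_xl_base_1.0_0.9vae.safetensors"})
```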

rugabunda commented 9 months ago

I see: when I imported the VAE version, I had selected the wrong config file... I was using inference rather than sd_xl_base.yaml.
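The config file matters because the YAML tells the converter which architecture to rebuild before the checkpoint's tensors are copied in; an SD1-style config produces a UNet whose layer shapes don't match SDXL weights. The model family can usually be told apart from the checkpoint's own key names. A hedged sketch (the key prefixes follow the original SDXL/SD1 checkpoint conventions; the sample keys are abbreviated stand-ins, not a full state dict):

```python
# SDXL checkpoints carry two text encoders under conditioner.embedders.*,
# while SD 1.x checkpoints keep a single one under cond_stage_model.*.
def looks_like_sdxl(state_dict_keys):
    return any(k.startswith("conditioner.embedders.1") for k in state_dict_keys)

# Abbreviated stand-in keys; real checkpoints have thousands.
sd1_keys = {"cond_stage_model.transformer.text_model.final_layer_norm.weight"}
sdxl_keys = {"conditioner.embedders.1.model.ln_final.weight"}

assert looks_like_sdxl(sdxl_keys)
assert not looks_like_sdxl(sd1_keys)
```

Choosing sd_xl_base.yaml for a checkpoint like this is what makes the conversion build a matching UNet.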

rugabunda commented 9 months ago

Changing that solved this problem.

Curious, though: why does Invoke download the tokenizer, text encoder, etc.? What do they do? Most checkpoints are just a single file. Is there a benefit to using Invoke's SDXL model over a single SDXL base checkpoint?