Closed: Fubu4u2 closed this issue 8 months ago.
Please link us to the specific inpainting model that is causing the problem.
It is not any particular inpainting model. I receive the same error with every inpainting model I try to use, even RunwayML's SD 1.5 inpainting model from Hugging Face.
https://huggingface.co/runwayml/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.ckpt
Here is the log from importing and attempting to convert the SD 1.5 inpainting ckpt:
[2024-03-27 06:43:28,838]::[ModelInstallService]::INFO --> Registered sd-v1-5-inpainting.ckpt with id b4b10089-92db-4855-ac8d-f89385cb16ef
[2024-03-27 06:43:28,868]::[InvokeAI]::WARNING --> 'int' object has no attribute 'startswith'
[2024-03-27 06:43:29,107]::[ModelInstallService]::INFO --> 1 new models registered; 0 unregistered
[2024-03-27 06:43:29,107]::[ModelInstallService]::INFO --> Scanning autoimport directory for new models
[2024-03-27 06:43:29,118]::[ModelInstallService]::INFO --> 0 new models registered
[2024-03-27 06:43:29,118]::[ModelInstallService]::INFO --> Model installer (re)initialized
[2024-03-27 06:43:29,118]::[uvicorn.access]::INFO --> 172.71.166.253:0 - "PATCH /api/v2/models/sync HTTP/1.0" 204
[2024-03-27 06:43:29,305]::[uvicorn.access]::INFO --> 172.71.166.252:0 - "GET /api/v2/models/ HTTP/1.0" 200
[2024-03-27 06:43:37,721]::[ModelLoadService]::INFO --> Converting /workspace/invokeai/models/sd-1/main/sd-v1-5-inpainting.ckpt to diffusers format
[2024-03-27 06:43:39,389]::[uvicorn.access]::INFO --> 172.71.166.253:0 - "PUT /api/v2/models/convert/b4b10089-92db-4855-ac8d-f89385cb16ef HTTP/1.0" 500
[2024-03-27 06:43:39,389]::[uvicorn.error]::ERROR --> Exception in ASGI application
Traceback (most recent call last):
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi_events/middleware.py", line 43, in __call__
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/api/routers/model_manager.py", line 691, in convert_model
    model_manager.load.load_model(model_config, submodel_type=SubModelType.Scheduler)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 80, in load_model
    ).load_model(model_config, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 62, in load_model
    model_path = self._convert_if_needed(model_config, model_path, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 82, in _convert_if_needed
    return self._convert_model(config, model_path, cache_path)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py", line 90, in _convert_model
    convert_ckpt_to_diffusers(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/convert_ckpt_to_diffusers.py", line 46, in convert_ckpt_to_diffusers
    pipe = download_from_original_stable_diffusion_ckpt(Path(checkpoint_path).as_posix(), **kwargs)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1472, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 348, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([320, 9, 3, 3]) in "weight" (which has shape torch.Size([320, 4, 3, 3])), this look incorrect.
[2024-03-27 06:43:39,668]::[uvicorn.access]::INFO --> 172.71.166.252:0 - "GET /api/v2/models/ HTTP/1.0" 200
[2024-03-27 06:43:49,499]::[uvicorn.access]::INFO --> 172.71.166.252:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ov-fZFp&sid=hwH2oPBiIL6ulaIFAAAQ HTTP/1.0" 200
[2024-03-27 06:43:49,637]::[uvicorn.access]::INFO --> 172.71.166.252:0 - "POST /ws/socket.io/?EIO=4&transport=polling&t=Ov-ffJN&sid=hwH2oPBiIL6ulaIFAAAQ HTTP/1.0" 200
My guess is that the model needs to be converted to diffusers format in order to inpaint, so inpainting fails because the models are still in ckpt format. If that is the case, then the underlying issue is that inpainting models fail to convert to diffusers.
Attempts to convert inpainting models to diffusers result in:
[2024-03-27 06:43:39,389]::[uvicorn.error]::ERROR --> Exception in ASGI application
and
ValueError: Trying to set a tensor of shape torch.Size([320, 9, 3, 3]) in "weight" (which has shape torch.Size([320, 4, 3, 3])), this look incorrect.
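For context, this shape mismatch is what you would expect if the converter builds a standard text-to-image UNet (4 input channels) and then tries to load inpainting weights into it: the SD 1.5 inpainting UNet's first conv takes 9 input channels (4 noise latents + 4 masked-image latents + 1 mask). A minimal sketch of the arithmetic, using illustrative names that are not part of the InvokeAI or diffusers API:

```python
# Sketch: why an SD 1.5 inpainting checkpoint has a conv_in weight of
# shape [320, 9, 3, 3] while a standard checkpoint has [320, 4, 3, 3].
# The constant names below are illustrative, not library identifiers.

LATENT_CHANNELS = 4        # noisy latents fed to the UNet each step
MASKED_IMAGE_CHANNELS = 4  # VAE-encoded latents of the masked source image
MASK_CHANNELS = 1          # downsampled binary inpainting mask

def unet_in_channels(inpainting: bool) -> int:
    """Input channels of the UNet's first convolution (conv_in)."""
    if inpainting:
        return LATENT_CHANNELS + MASKED_IMAGE_CHANNELS + MASK_CHANNELS
    return LATENT_CHANNELS

# conv_in weight shape is (out_channels, in_channels, kernel_h, kernel_w)
standard_shape = (320, unet_in_channels(inpainting=False), 3, 3)
inpaint_shape = (320, unet_in_channels(inpainting=True), 3, 3)

print(standard_shape)  # (320, 4, 3, 3)
print(inpaint_shape)   # (320, 9, 3, 3)
```

So the error means the conversion code instantiated a 4-channel UNet for a checkpoint whose weights are 9-channel, i.e. it did not recognize the checkpoint as an inpainting model.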
Thanks, there was a problem with the diffusers conversion logic, resolved in #6064.
happy to help
Is there an existing issue for this problem?
Operating system
Linux
GPU vendor
Nvidia (CUDA)
GPU model
RTX 4090
GPU VRAM
64Gb
Version number
v4.0.0rc5
Browser
Chrome Version 123.0.6312.59 (Official Build) (64-bit)
Python dependencies
{
  "accelerate": "0.28.0",
  "compel": "2.0.2",
  "cuda": "12.1",
  "diffusers": "0.27.0",
  "numpy": "1.26.4",
  "opencv": "4.9.0.80",
  "onnx": "1.15.0",
  "pillow": "10.0.0",
  "python": "3.10.10",
  "torch": "2.2.1+cu121",
  "torchvision": "0.17.1",
  "transformers": "4.38.2",
  "xformers": "0.0.25"
}
What happened
Attempting to convert inpainting models to diffusers failed:
[2024-03-26 18:11:07,096]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /openapi.json HTTP/1.0" 200
[2024-03-26 18:11:07,175]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "GET /api/v1/images/i/2d4e9bd6-99b9-4c07-8c51-4e151a4c11e4.png/full HTTP/1.0" 200
[2024-03-26 18:11:08,655]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /api/v1/app/invocation_cache/status HTTP/1.0" 200
[2024-03-26 18:11:08,666]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "GET /api/v1/queue/default/list HTTP/1.0" 200
[2024-03-26 18:11:09,619]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /api/v2/models/i/0031b928-9685-4ec2-b1d0-5d901bb821b6 HTTP/1.0" 200
[2024-03-26 18:11:19,776]::[ModelLoadService]::INFO --> Converting /workspace/invokeai/models/sd-1/main/m_ohwx1024_rv5.1_9250-inpainting.safetensors to diffusers format
[2024-03-26 18:11:19,883]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "PUT /api/v2/models/convert/4b9d44a7-3055-4f5a-a3d1-c645179f779a HTTP/1.0" 500
[2024-03-26 18:11:19,883]::[uvicorn.error]::ERROR --> Exception in ASGI application
Traceback (most recent call last):
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi_events/middleware.py", line 43, in __call__
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/api/routers/model_manager.py", line 691, in convert_model
    model_manager.load.load_model(model_config, submodel_type=SubModelType.Scheduler)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 80, in load_model
    ).load_model(model_config, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 62, in load_model
    model_path = self._convert_if_needed(model_config, model_path, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 82, in _convert_if_needed
    return self._convert_model(config, model_path, cache_path)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py", line 90, in _convert_model
    convert_ckpt_to_diffusers(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/convert_ckpt_to_diffusers.py", line 46, in convert_ckpt_to_diffusers
    pipe = download_from_original_stable_diffusion_ckpt(Path(checkpoint_path).as_posix(), **kwargs)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1472, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 348, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([320, 9, 3, 3]) in "weight" (which has shape torch.Size([320, 4, 3, 3])), this look incorrect.
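One way a conversion path can avoid this failure mode is to read the first conv's shape out of the checkpoint's state dict and size the target UNet's `in_channels` accordingly, instead of assuming 4. A rough, hypothetical sketch (the key name follows the usual LDM/SD checkpoint layout, shapes are plain tuples standing in for tensor shapes, and this is not necessarily the actual InvokeAI fix):

```python
# Hypothetical sketch: detect an inpainting checkpoint from the shape of
# the first UNet conv weight and size the target UNet config to match.
# "state_dict" stands in for torch.load(ckpt)["state_dict"]; shapes are
# plain tuples here so the example is self-contained.

CONV_IN_KEY = "model.diffusion_model.input_blocks.0.0.weight"

def detect_unet_in_channels(state_dict: dict) -> int:
    """Return in_channels from the conv_in weight shape (O, I, kH, kW)."""
    shape = state_dict[CONV_IN_KEY]
    return shape[1]

def unet_config_for(state_dict: dict) -> dict:
    in_ch = detect_unet_in_channels(state_dict)
    # 9 channels => inpainting UNet; 4 => standard text-to-image UNet
    return {"in_channels": in_ch, "out_channels": 4}

ckpt = {CONV_IN_KEY: (320, 9, 3, 3)}  # sd-v1-5-inpainting layout
print(unet_config_for(ckpt))  # {'in_channels': 9, 'out_channels': 4}
```

With that detection in place, the converter would build a 9-channel UNet for inpainting checkpoints and the `set_module_tensor_to_device` shape check would pass.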
I didn't think much of it until I attempted to use the model in the Unified Canvas, which gave me a ValueError and the following:
[2024-03-26 18:15:34,728]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "POST /ws/socket.io/?EIO=4&transport=polling&t=Ovx-OYB&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:15:59,730]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-OYB.0&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:00,025]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "POST /ws/socket.io/?EIO=4&transport=polling&t=Ovx-Uk8&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:23,449]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /api/v2/models/i/4b9d44a7-3055-4f5a-a3d1-c645179f779a HTTP/1.0" 200
[2024-03-26 18:16:25,027]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-Uk9&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:25,181]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "POST /api/v1/images/upload?image_category=general&is_intermediate=true HTTP/1.0" 201
[2024-03-26 18:16:25,182]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "POST /ws/socket.io/?EIO=4&transport=polling&t=Ovx-avI&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:25,432]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "POST /api/v1/images/upload?image_category=mask&is_intermediate=true HTTP/1.0" 201
[2024-03-26 18:16:25,597]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.0" 200
[2024-03-26 18:16:25,598]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-avI.0&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:25,630]::[ModelLoadService]::INFO --> Converting /workspace/invokeai/models/sd-1/main/m_ohwx1024_rv5.1_9250-inpainting.safetensors to diffusers format
[2024-03-26 18:16:25,732]::[InvokeAI]::ERROR --> Error while invoking session d3d2bdc0-c290-484b-bf5c-d5cb97e8b1dc, invocation 27381d4f-ddac-4937-b92f-eea1d62c9be7 (compel): Trying to set a tensor of shape torch.Size([320, 9, 3, 3]) in "weight" (which has shape torch.Size([320, 4, 3, 3])), this look incorrect.
[2024-03-26 18:16:25,732]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 160, in _process
    outputs = self._invocation.invoke_internal(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/invocations/baseinvocation.py", line 281, in invoke_internal
    output = self.invoke(context)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/invocations/compel.py", line 57, in invoke
    tokenizer_info = context.models.load(self.clip.tokenizer)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/services/shared/invocation_context.py", line 339, in load
    return self._services.model_manager.load.load_model(model, _submodel_type, self._data)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/app/services/model_load/model_load_default.py", line 80, in load_model
    ).load_model(model_config, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 62, in load_model
    model_path = self._convert_if_needed(model_config, model_path, submodel_type)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/load_default.py", line 82, in _convert_if_needed
    return self._convert_model(config, model_path, cache_path)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py", line 90, in _convert_model
    convert_ckpt_to_diffusers(
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/invokeai/backend/model_manager/convert_ckpt_to_diffusers.py", line 46, in convert_ckpt_to_diffusers
    pipe = download_from_original_stable_diffusion_ckpt(Path(checkpoint_path).as_posix(), **kwargs)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1472, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "/workspace/invokeai/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 348, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([320, 9, 3, 3]) in "weight" (which has shape torch.Size([320, 4, 3, 3])), this look incorrect.
[2024-03-26 18:16:25,736]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-b2P&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:25,737]::[InvokeAI]::INFO --> Graph stats: d3d2bdc0-c290-484b-bf5c-d5cb97e8b1dc
Node               Calls  Seconds  VRAM Used
main_model_loader  1      0.001s   3.135G
clip_skip          1      0.000s   3.135G
compel             1      0.103s   3.135G
TOTAL GRAPH EXECUTION TIME: 0.104s
TOTAL GRAPH WALL TIME: 0.104s
RAM used by InvokeAI process: 11.14G (+0.028G)
RAM used to load models: 0.00G
VRAM in use: 3.135G
RAM cache statistics:
  Model cache hits: 0
  Model cache misses: 0
  Models cached: 0
  Models cleared from cache: 0
  Cache high water mark: 0.00/0.00G
[2024-03-26 18:16:25,756]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /api/v1/queue/default/status HTTP/1.0" 200
[2024-03-26 18:16:25,823]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "GET /api/v1/queue/default/list HTTP/1.0" 200
[2024-03-26 18:16:25,867]::[uvicorn.access]::INFO --> 172.71.166.186:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-b4P&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
[2024-03-26 18:16:26,027]::[uvicorn.access]::INFO --> 172.71.166.187:0 - "GET /ws/socket.io/?EIO=4&transport=polling&t=Ovx-b6V&sid=IKNJykJbTHzwFBkXAAAK HTTP/1.0" 200
This error is not specific to this particular model; it occurs with any inpainting model I attempt to use. I am running this on RunPod but have never had an issue like this before. This is only my second time updating to and using v4.0, and my first time updating to v4.05; I had previously updated to v4.02 with no problem.
What you expected to happen
I expected the model to convert to diffusers, as it has every time in the past, and to edit my image accordingly.
How to reproduce the problem
start with runpod invokeai-v3.3 template
update to v4.05
install inpainting model
attempt to convert to diffusers
cry
attempt to inpaint
cry harder
Additional context
I am running this on RunPod but have never had an issue like this before. This is only my second time updating to and using v4.0, and my first time updating to v4.05; I had previously updated to v4.02 on RunPod with no problem.
Discord username
No response