invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Converting checkpoint to diffusers fails #6514

Open raldone01 opened 3 weeks ago

raldone01 commented 3 weeks ago

Operating system

Linux (docker)

GPU vendor

Nvidia (CUDA)

Version number

a3cb5da130280d380cbb7bf4492f8cf50ebd060b

Browser

Firefox

What happened

When I try to convert one of my models to diffusers in the model manager, the request fails because of a mismatch in the .convert_cache naming. The conversion itself appears to succeed — the converted folder is written to disk:

❯ ls .convert_cache/
<folder> invokeai_models_sd-1_main_cyberrealistic_v5_fp32.safetensors

However, the model manager looks for /invokeai/models/.convert_cache/8b1c8220-9abd-481f-9827-be64ec67461a, which does not exist.
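A minimal sketch of the suspected mismatch. The two helper functions below are purely illustrative — they are not InvokeAI's actual internals — and the original checkpoint path is inferred from the folder name observed in .convert_cache:

```python
from pathlib import Path

# Hypothetical illustration of the two naming schemes (NOT InvokeAI's real code).

def cache_name_from_source(source_path: str) -> str:
    # What appears on disk: the checkpoint's full path, flattened into a
    # single name by replacing path separators with underscores.
    return source_path.strip("/").replace("/", "_")

def cache_name_from_key(model_key: str) -> str:
    # What the convert endpoint appears to look up: the model's database
    # key (a UUID).
    return model_key

cache_dir = Path("/invokeai/models/.convert_cache")
written = cache_dir / cache_name_from_source(
    "/invokeai/models/sd-1/main/cyberrealistic_v5_fp32.safetensors"  # inferred path
)
looked_up = cache_dir / cache_name_from_key("8b1c8220-9abd-481f-9827-be64ec67461a")

print(written.name)    # invokeai_models_sd-1_main_cyberrealistic_v5_fp32.safetensors
print(looked_up.name)  # 8b1c8220-9abd-481f-9827-be64ec67461a
# The two names differ, so `assert cache_path.exists()` on the key-based
# path fails even though the conversion output is on disk.
```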

I patched invokeai/app/api/routers/model_manager.py with the following:

cache_path = loader.convert_cache.cache_path(key)
+logger.info(f"CachePath: {cache_path}")
assert cache_path.exists()
Retrying the conversion then logs:

invoke_ai-1  | [2024-06-14 15:11:22,532]::[InvokeAI]::INFO --> CachePath: /invokeai/models/.convert_cache/8b1c8220-9abd-481f-9827-be64ec67461a
invoke_ai-1  | [2024-06-14 15:11:22,533]::[uvicorn.access]::INFO --> 172.24.13.33:15583 - "PUT /api/v2/models/convert/8b1c8220-9abd-481f-9827-be64ec67461a HTTP/1.1" 500
invoke_ai-1  | [2024-06-14 15:11:22,533]::[uvicorn.error]::ERROR --> Exception in ASGI application
invoke_ai-1  | 
invoke_ai-1  | Traceback (most recent call last):
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
invoke_ai-1  |     result = await app(  # type: ignore[func-returns-value]
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
invoke_ai-1  |     return await self.app(scope, receive, send)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
invoke_ai-1  |     await super().__call__(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
invoke_ai-1  |     await self.middleware_stack(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
invoke_ai-1  |     await self.app(scope, receive, _send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
invoke_ai-1  |     await responder(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
invoke_ai-1  |     await self.app(scope, receive, self.send_with_gzip)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
invoke_ai-1  |     await self.simple_response(scope, receive, send, request_headers=headers)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi_events/middleware.py", line 43, in __call__
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
invoke_ai-1  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
invoke_ai-1  |     await app(scope, receive, sender)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
invoke_ai-1  |     await self.middleware_stack(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
invoke_ai-1  |     await route.handle(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
invoke_ai-1  |     await self.app(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
invoke_ai-1  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
invoke_ai-1  |     raise exc
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
invoke_ai-1  |     await app(scope, receive, sender)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
invoke_ai-1  |     response = await func(request)
invoke_ai-1  |                ^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
invoke_ai-1  |     raw_response = await run_endpoint_function(
invoke_ai-1  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
invoke_ai-1  |     return await dependant.call(**values)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/api/routers/model_manager.py", line 634, in convert_model
invoke_ai-1  |     assert cache_path.exists()
invoke_ai-1  | AssertionError
invoke_ai-1  | [2024-06-14 15:11:22,601]::[uvicorn.access]::INFO --> 172.24.13.33:15584 - "GET /api/v2/models/ HTTP/1.1" 200

Note: Converting with older releases works fine.

Sourdface commented 4 days ago

Can confirm. The path that actually gets generated after conversion is based on the full path to the original checkpoint and does not contain the UUID (the model key?) that the conversion endpoint is looking for.

A workaround:

1. Start the conversion and let it fail with the error above.
2. Manually move the converted checkpoint folder out of .convert_cache to the normal location where Invoke places models of that type.
3. Tell InvokeAI to scan for models within its root.
4. Install the model from its new location.

After that, the model appears to behave like a normally installed diffusers model.
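The file-moving part of the workaround can be simulated end to end in a throwaway directory. All paths and names below are placeholders standing in for the InvokeAI root; the UI steps (scan and install) of course cannot be scripted here:

```python
import shutil
import tempfile
from pathlib import Path

# Throwaway directory standing in for $INVOKEAI_ROOT/models.
root = Path(tempfile.mkdtemp())
convert_cache = root / ".convert_cache"

# Stand-in for the converted diffusers folder left behind after the failed
# conversion (name taken from the report).
converted = convert_cache / "invokeai_models_sd-1_main_cyberrealistic_v5_fp32.safetensors"
converted.mkdir(parents=True)

# Move it to where Invoke keeps models of this base/type (placeholder path).
dest = root / "sd-1" / "main" / "cyberrealistic_v5"
dest.parent.mkdir(parents=True)
shutil.move(str(converted), str(dest))

# After this, scan and install from the Model Manager UI; here we just
# verify the move succeeded.
print(dest.is_dir())  # True
```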

raldone01 commented 3 days ago

I just downgrade Invoke when I need to convert to diffusers. You only have to fix the schema version in invoke.yml afterwards. I've had no issues with DB migrations so far...