invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Error in Web UI: Model Conversion Failed: 3dCartoonVision_v10 (2) #6629

Open · nekrasovdmitriy opened this issue 1 month ago

nekrasovdmitriy commented 1 month ago

Is there an existing issue for this problem?

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

RTX A4000

GPU VRAM

16 GB

Version number

4.2.5

Browser

Safari 17.5

Python dependencies

accelerate 0.30.1
compel 2.0.2
cuda 12.1
diffusers 0.27.2
numpy 1.26.4
opencv 4.9.0.80
onnx 1.15.0
pillow 10.4.0
python 3.11.9
torch 2.2.2+cu121
torchvision 0.17.2
transformers 4.41.1
xformers 0.0.25.post1

What happened

Click "convert" button in models list to convert it to diffusers format.

Error in Web UI: Model Conversion Failed: 3dCartoonVision_v10 (2)

[2024-07-16 10:09:34,347]::[uvicorn.access]::INFO --> 192.168.1.70:64653 - "PUT /api/v2/models/convert/ce6cb1ae-1754-49bd-8dd7-65bba24bb6f5 HTTP/1.1" 500
[2024-07-16 10:09:34,347]::[uvicorn.error]::ERROR --> Exception in ASGI application

Traceback (most recent call last):
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
    await self.app(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/fastapi_events/middleware.py", line 43, in __call__
    await self.app(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/.venv/lib/python3.11/site-packages/invokeai/app/api/routers/model_manager.py", line 760, in convert_model
    assert cache_path.exists()
           ^^^^^^^^^^^^^^^^^^^
AssertionError
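The failure point is the bare assert cache_path.exists() in the convert_model route of model_manager.py, so the server returns a 500 with no explanatory message. For reproducing this outside the web UI, the access log above already shows the endpoint the "Convert" button calls. A minimal sketch of the same request is below; the host/port and model key are placeholders (the key here is copied from the access log) and no authentication is assumed:

    # Minimal reproduction sketch (assumption: InvokeAI listening on localhost:9090,
    # no auth configured). Replace MODEL_KEY with the key of the checkpoint model
    # you want to convert.
    import requests

    BASE_URL = "http://localhost:9090"  # assumed default InvokeAI host/port
    MODEL_KEY = "ce6cb1ae-1754-49bd-8dd7-65bba24bb6f5"  # key taken from the access log

    # Same call the web UI makes when the "Convert" button is clicked.
    resp = requests.put(f"{BASE_URL}/api/v2/models/convert/{MODEL_KEY}", timeout=600)
    print(resp.status_code)  # 500 when the AssertionError above is raised
    print(resp.text)         # typically the plain "Internal Server Error" body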

What you expected to happen

The model should be converted to diffusers format successfully.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

psychedelicious commented 1 month ago

@lstein We may have some regressions related to on-demand diffusers conversions. I imagine we just need to add a temp dir.

That said, I wonder if there is any particular reason to support this moving forward. What's the functional use case, now that we support single-file loading? Maybe we should remove the functionality.
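For context on the single-file loading alternative mentioned above: with the diffusers version pinned in this install (0.27.2), a checkpoint like 3dCartoonVision_v10 can be loaded directly from its .safetensors file, and a diffusers-format copy can still be written out afterwards if wanted. A rough sketch with placeholder paths follows; this is not InvokeAI's conversion code, just the underlying diffusers calls, and the temp-dir step only illustrates the approach suggested above:

    # Sketch of single-file loading with diffusers (paths are placeholders).
    import tempfile

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the checkpoint directly from a single .safetensors file,
    # with no prior conversion to the diffusers directory layout.
    pipe = StableDiffusionPipeline.from_single_file(
        "/path/to/3dCartoonVision_v10.safetensors",  # placeholder path
        torch_dtype=torch.float16,
    )

    # If a diffusers-format copy is still needed, write it somewhere that is
    # guaranteed to exist first, e.g. a temporary directory, then move it into
    # the managed models folder -- roughly the "temp dir" idea mentioned above.
    with tempfile.TemporaryDirectory() as tmp_dir:
        pipe.save_pretrained(tmp_dir)
        # ... copy/rename tmp_dir into the models directory here ...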