Looking at the container logs it seems InvokeAI is stuck waiting for user input when it starts:
```
~ docker logs invokeai
/invokeai/invokeai.yaml exists. InvokeAI is already configured.
To reconfigure InvokeAI, delete the above file.
======================================================================
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
/opt/venv/invokeai/lib/python3.11/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
Please select the scheduler prediction type of the checkpoint named realisticVisionV51_v51VAE.safetensors:
[1] "epsilon" - most v1.5 models and v2 models trained on 512 pixel images
[2] "vprediction" - v2 models trained on 768 pixel images and a few v1.5 models
[3] Accept the best guess; you can fix it in the Web UI later
select [3]> #
```
If I delete that model and restart the container, it does the same for the next one, and I think for every LoRA, all while the web UI stays offline.
So I think there are a few problems here:
- Software running in a container should assume by default that it is running non-interactively.
- The application's web UI stays offline while the CLI waits for input, so there is no clue as to why the app isn't working unless you check the server's console logs.
- It seems odd that LoRAs which have been working for quite some time suddenly require this input.
- The prompt itself doesn't suggest how to resolve this in a non-interactive environment (e.g. a link to a wiki doc, a hint to set an environment variable, etc.).
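To illustrate the first and last points, here is a minimal sketch of what the fix could look like: guard the prompt with a TTY check and an environment-variable override, falling back to the best guess exactly as option [3] already does. The function name, the `INVOKEAI_NONINTERACTIVE` variable, and the prompt text are hypothetical, not InvokeAI's actual code.

```python
import os
import sys

def choose_prediction_type(default: str = "epsilon") -> str:
    """Return a scheduler prediction type, prompting only when a real
    terminal is attached; otherwise fall back to the best guess.

    Hypothetical helper sketching the fix proposed above -- not
    InvokeAI's actual code.
    """
    # In a container started without a TTY, stdin is not interactive,
    # so blocking on input() here would hang the whole startup.
    if not sys.stdin.isatty() or os.environ.get("INVOKEAI_NONINTERACTIVE"):
        print(f"Non-interactive session; assuming '{default}'. "
              "You can fix it in the Web UI later.")
        return default

    # Interactive terminal: safe to ask, defaulting to the best guess.
    choice = input("select [3]> ").strip() or "3"
    return {"1": "epsilon", "2": "vprediction"}.get(choice, default)
```

With something like this in place, a containerized start would log the assumed value and keep booting instead of blocking the web UI.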
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
24GB
What version did you experience this issue on?
3.6.0
What happened?
After updating to 3.6.0 InvokeAI fails to start.
Screenshots
No response
Additional context
No response
Contact Details
No response