Open StellarBeing25 opened 1 month ago
With single-file (checkpoint) loading, diffusers still needs access to the models' configuration files. Previously, when we converted models, we used a local copy of these config files. With single-file loading, we are no longer referencing the local config files so diffusers is downloading them.
The latest diffusers release revises the single-file loading logic. I think we'll need to upgrade to the latest version, then review the new API to see what our options are.
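For illustration only, the option described above — preferring the config files already shipped on disk and falling back to the network only when a local copy is missing — could be sketched roughly like this. Every name here (`resolve_local_config`, the `configs_root` layout, the base-type keys) is hypothetical and not InvokeAI's actual code:

```python
from pathlib import Path
from typing import Optional

# Hypothetical resolver: the directory layout and base-type keys below are
# illustrative assumptions, not InvokeAI's real config structure.
def resolve_local_config(configs_root: Path, base_type: str) -> Optional[Path]:
    """Return the on-disk config directory for a base model type, if present.

    Returns None when no local config exists, in which case the caller would
    have to fall back to downloading from the Hugging Face Hub.
    """
    candidates = {
        "sd-1": configs_root / "stable-diffusion" / "v1-inference",
        "sd-2": configs_root / "stable-diffusion" / "v2-inference",
        "sdxl": configs_root / "stable-diffusion" / "sd_xl_base",
    }
    path = candidates.get(base_type)
    if path is not None and path.is_dir():
        return path
    return None  # caller falls back to a network fetch
```

The point of the sketch is the ordering: check the bundled configs first, and only ever touch the network when the lookup misses.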
This makes it completely ONLINE ONLY! The configs folder is right there locally, ready to be used lol! Wasted a good hour+ trying to fix it. Please fix this; it's unusable until then!
@lstein Forgot to tag you - I think we should be able to fix this up pretty easily.
Still the same error! Cannot use offline at all!
InvokeAI demands an internet connection to download config files that are already local! Every time you change the model!
Setting `legacy_config_dir` in `invokeai.yaml` doesn't help; it still demands internet.
This bug should be retitled to 'redundant yaml downloads, internet required'.
`Server Error (3) RuntimeError: Parent directory` is an error you get when you attempt to run two instances of Invoke on different ports and use both GPUs. This is a new error, as multiple GPUs worked in version 2 of InvokeAI.
```
[2024-07-30 14:58:30,489]::[InvokeAI]::ERROR --> Error while invoking session b732c0b2-a589-4d0c-9e13-c91ad33f7964, invocation 89825c16-4255-4f98-8a85-384a25b56624 (noise): Parent directory C:\Users\james\invokeai\outputs\images\AIartist\tensors\tmpue9b9p26 does not exist.
[2024-07-30 14:58:30,489]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 289, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\invocations\noise.py", line 119, in invoke
    name = context.tensors.save(tensor=noise)
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 288, in save
    name = self._services.tensors.save(obj=tensor)
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\services\object_serializer\object_serializer_forward_cache.py", line 47, in save
    name = self._underlying_storage.save(obj)
  File "C:\Users\james\invokeai.venv\lib\site-packages\invokeai\app\services\object_serializer\object_serializer_disk.py", line 55, in save
    torch.save(obj, file_path)  # pyright: ignore [reportUnknownMemberType]
  File "C:\Users\james\invokeai.venv\lib\site-packages\torch\serialization.py", line 628, in save
    with _open_zipfile_writer(f) as opened_zipfile:
  File "C:\Users\james\invokeai.venv\lib\site-packages\torch\serialization.py", line 502, in _open_zipfile_writer
    return container(name_or_buffer)
  File "C:\Users\james\invokeai.venv\lib\site-packages\torch\serialization.py", line 473, in __init__
    super().__init__(torch._C.PyTorchFileWriter(self.name))
RuntimeError: Parent directory C:\Users\james\invokeai\outputs\images\AIartist\tensors\tmpue9b9p26 does not exist.
```
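For what it's worth, that trace bottoms out in `torch.save` failing because the target file's parent directory is missing. A generic guard against this class of error (a sketch only: `save_with_parents` is a made-up name, not InvokeAI's serializer, and `pickle` stands in for `torch.save`) looks like:

```python
import pickle
from pathlib import Path

def save_with_parents(obj, file_path: Path) -> None:
    """Create the parent directory before serializing, so the writer never
    hits 'Parent directory ... does not exist'.

    pickle stands in for torch.save here; the directory-creation guard is
    the point, not the serializer.
    """
    file_path = Path(file_path)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    with file_path.open("wb") as f:
        pickle.dump(obj, f)
```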
@jameswan That's an entirely different problem. Please create your own issue. Note that Invoke v2 is ancient.
@someaccount1234 Yes, this is still a problem. We will close this issue when it is resolved.
If some more StackTrace is needed, I provided it in my (duplicated) issue: https://github.com/invoke-ai/InvokeAI/issues/6702
Thanks @TobiasReich I saw that.
This isn't a mysterious issue, the cause is very clear.
I experimented the other day with providing the config files we already have on disk, but diffusers couldn't load the tokenizer or text encoder. It's not obvious to me why.

It doesn't matter anyway, though, because diffusers just refactored the API we use to load models, so whatever issue I'm running into may well no longer exist. We need to update the diffusers dependency (one of our core deps), adapt to some other changes they've made, and then figure out how to provide the required config files.
Now, in the infinite canvas, every time a new image is uploaded, an internet connection is required. Maybe it's time to go back to the old version.
@MOzhi327 No, that's not how it works. There's no internet connection needed when using canvas. What makes you think an internet connection is required?
@psychedelicious Thank you for the reply. On my side, if the VPN is turned off, there is no way to load the model, as follows. When I turn on the VPN and generate an image, I can then continue to generate without the VPN. Today I turned off the VPN again, and without adjusting any parameters, a network connection failure appeared when generating. (I just tested it again and can still generate after turning off the VPN, so maybe that failure had another cause. Sorry.) In any case, every time the model is loaded, an internet connection is required, which is very inconvenient for me. Using a VPN causes problems with my other software, so for now I'm considering using the old version.
@MOzhi327 Ok, thanks for clarifying. Yes, we know about the internet connectivity issue and will fix it.
@psychedelicious Thank you very much
The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. We have a few large projects that are taking all contributors' time and which are both resource and technical blockers to resolving this issue.
You do not need to use single file loading in the first place. You can convert your checkpoint/safetensors models to diffusers before going offline (button in the model manager) and then there's no internet connection needed to generate.
@psychedelicious Thank you! That works, but converting a model in an external directory costs additional disk space. It's workable for someone like me who mainly uses one or two models, but it may not be practical for people who need many models.
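The tradeoff being discussed is visible in the on-disk layout: a converted diffusers folder carries its own `model_index.json` and generates offline, while a single `.safetensors`/`.ckpt` file needs external config files at load time. A rough heuristic sketch (illustrative only, not InvokeAI's actual model probe):

```python
from pathlib import Path

# Illustrative heuristic -- not InvokeAI's model-probing code.
def needs_configs_at_load(model_path: Path) -> bool:
    """True for single-file checkpoints (.safetensors/.ckpt), which need
    external config files at load time; False for converted diffusers
    folders, which carry their own model_index.json and work offline."""
    model_path = Path(model_path)
    if model_path.is_dir():
        return not (model_path / "model_index.json").is_file()
    return model_path.suffix in {".safetensors", ".ckpt"}
```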
> The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. We have a few large projects that are taking all contributors' time and which are both resource and technical blockers to resolving this issue.
> You do not need to use single file loading in the first place. You can convert your checkpoint/safetensors models to diffusers before going offline (button in the model manager) and then there's no internet connection needed to generate.
THANK YOU.
Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
GTX 1660
GPU VRAM
6GB
Version number
4.2.6
Browser
Firefox
Python dependencies
No response
What happened
I am using SD 1.5 models in safetensors format, without converting them to diffusers.
Image generation fails with a server error when starting InvokeAI offline. Generation succeeds only when the first generation runs with an internet connection; subsequent generations then work offline until I switch models, at which point the error reappears if I am still offline.
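The pattern reported here — first generation needs network, later ones don't, until a model switch — is consistent with config files being fetched once into the local Hugging Face cache and reused afterward. A rough, illustrative check, assuming the standard `models--{org}--{name}` cache layout (`repo_is_cached` is a made-up helper, not part of any library):

```python
from pathlib import Path

# Illustrative check, assuming the standard Hugging Face cache layout
# (~/.cache/huggingface/hub/models--{org}--{name}). Not InvokeAI code.
def repo_is_cached(cache_root: Path, repo_id: str) -> bool:
    """True if a snapshot of repo_id already exists in the local cache,
    meaning a later load should succeed without network access."""
    folder = "models--" + repo_id.replace("/", "--")
    snapshots = cache_root / folder / "snapshots"
    return snapshots.is_dir() and any(snapshots.iterdir())
```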
What you expected to happen
InvokeAI should work offline.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response