Closed: drzoidberg90 closed this issue 1 year ago.
I have the same issue while following Arki's guides: https://stablediffusionguides.carrd.co/#invoke-ai
Windows 11 (22H2)
cuda
8GB
Output:
$ python scripts/invoke.py --web
* Initializing, be patient...
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cpu
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
NOTE: Redirects are currently not supported in Windows or MacOs.
| Using more accurate float32 precision
>> Model loaded in 10.47s
>> Setting Sampler to k_lms
* --web was specified, starting web server...
>> Started Invoke AI Web Server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090
Windows 11, Ryzen 5600X, 32GB RAM, RTX 3070
Same issue here: Windows 11 22H2, 32GB RAM, Ryzen 6900HS, Nvidia 3070 Ti 8GB.
Tried under WSL2 with Ubuntu; same issue.
Same root cause as https://github.com/invoke-ai/InvokeAI/issues/1264, though the title on this issue is superior.
I "fixed" mine by forcing the pip versions back to the known working ones: https://github.com/invoke-ai/InvokeAI/issues/1264#issuecomment-1293572653
We need @mauwii or someone else to do a code change ;)
Totally the same problem as my #1282, except people actually give a damn lol.
The #1264 method didn't work for me, but I am on Debian and could not find the file where I should replace SIGKILL; is the file structure different on Linux?
For Linux you have to find the Python library path: use conda info --envs to see the path of the "invokeai" environment. You can then locate lib/python3.10/site-packages/torch/distributed/elastic/timer/file_based_local_timer.py.
edit: it still does not work for me on Fedora
edit 2: I confirm that @UnicodeTreason's workaround does work
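For context, the workaround being discussed edits torch's file_based_local_timer.py, where a direct reference to signal.SIGKILL breaks on Windows because that signal does not exist there. A minimal sketch of the underlying idea (illustrative only, not the actual torch patch; the real fix lives inside torch/InvokeAI, not user code):

```python
import signal

# Windows does not define signal.SIGKILL, so a module that references it
# directly raises AttributeError at import time. Using getattr with a
# fallback picks SIGKILL where it exists and SIGTERM elsewhere.
DEFAULT_KILL_SIGNAL = getattr(signal, "SIGKILL", signal.SIGTERM)
```

On Linux and macOS this resolves to SIGKILL; on Windows it falls back to SIGTERM, which is why replacing SIGKILL with SIGTERM in the torch file works as a manual patch.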
Can confirm the above workaround worked. I had to delete and recreate the environment afterwards, but it is using CUDA now.
The newest commit fixes the issue! Tested on CentOS.
You mean the new release? Do I need to do a fresh reinstall of InvokeAI?
> You mean the new release?

The latest release came out just hours after my previous comment; you should try it.

> Do I need to do a fresh reinstall of InvokeAI?

For me, yes, because I still got the problem in #1015.
FUCK YEAH, Fixed the CUDA, now model thingamajiggy is FORKED!
lol glad it's working for you guys!
Is there an existing issue for this?
OS
Windows
GPU
cuda
VRAM
8GB
What happened?
Running the default setup of InvokeAI using the Conda install (https://invoke-ai.github.io/InvokeAI/installation/INSTALL_WINDOWS/). I am able to generate images successfully, but the log reports >> Using device_type cpu.
I have installed the CUDA drivers from NVIDIA, updated GeForce to the latest drivers, and re-pulled from Git and rebuilt the environment after the driver installation.
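One reason the log can say cpu even with working NVIDIA drivers: PyTorch only exposes CUDA if the installed wheel was built with CUDA support, so a CPU-only wheel reports no GPU regardless of drivers. A minimal sketch of the common fallback pattern (assumed logic, not InvokeAI's actual code; the boolean is passed in so the sketch runs without torch installed):

```python
def choose_device(cuda_available: bool) -> str:
    """Return the device string a model loader would use.

    Mirrors the common pattern
    `"cuda" if torch.cuda.is_available() else "cpu"`.
    """
    return "cuda" if cuda_available else "cpu"

# With a CPU-only torch wheel, cuda_available is False, producing the
# ">> Using device_type cpu" line seen in the log above.
print(choose_device(False))  # prints "cpu"
```

A quick check inside the environment is to run `python -c "import torch; print(torch.__version__, torch.cuda.is_available())"`; CUDA-enabled PyTorch wheels typically carry a +cuXXX suffix in the version string.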
Specs:
Screenshots
No response
Additional context
No response
Contact Details
No response