invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: device_type defaulting to CPU instead of GPU #1322

Closed drzoidberg90 closed 1 year ago

drzoidberg90 commented 1 year ago

Is there an existing issue for this?

OS

Windows

GPU

cuda

VRAM

8GB

What happened?

Running the default setup of InvokeAI using the Conda install (https://invoke-ai.github.io/InvokeAI/installation/INSTALL_WINDOWS/). I am able to generate images successfully, but the log reports `>> Using device_type cpu`.

(invokeai) C:\Users\user\InvokeAI>python scripts\invoke.py --web
* Initializing, be patient...
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
**>> Using device_type cpu**
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
NOTE: Redirects are currently not supported in Windows or MacOs.
   | Using more accurate float32 precision
>> Model loaded in 11.05s
>> Setting Sampler to k_lms

I have installed CUDA drivers from Nvidia, updated my GeForce to the latest drivers, and repulled from Git and rebuilt the environment after driver installation.
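One way to narrow this down is to check which torch build pip actually installed: CPU-only PyTorch wheels carry a `+cpu` local version suffix, while CUDA wheels use suffixes like `+cu116`. A quick diagnostic sketch (`is_cpu_only_build` is a hypothetical helper written for this check, not part of InvokeAI or torch):

```python
def is_cpu_only_build(version: str) -> bool:
    """Return True for CPU-only PyTorch wheels, whose version strings
    carry a '+cpu' local suffix (e.g. '1.12.1+cpu'); CUDA wheels use
    suffixes like '+cu116', and plain versions have no suffix."""
    return "+" in version and version.split("+", 1)[1] == "cpu"

if __name__ == "__main__":
    try:
        import torch
        print("torch version:", torch.__version__)
        print("CPU-only wheel:", is_cpu_only_build(torch.__version__))
        print("CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("torch is not installed in this environment")
```

If this prints `CPU-only wheel: True` or `CUDA available: False` inside the activated `invokeai` environment, the environment has a torch build that cannot see the GPU.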

Specs:

Screenshots

No response

Additional context

No response

Contact Details

No response

obgr commented 1 year ago

I have the same issue while following Arki's Guides: https://stablediffusionguides.carrd.co/#invoke-ai

OS

Windows 11 (22H2)

GPU

cuda

VRAM

8GB

Output:

$ python scripts/invoke.py --web
* Initializing, be patient...
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cpu
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
NOTE: Redirects are currently not supported in Windows or MacOs.
   | Using more accurate float32 precision
>> Model loaded in 10.47s
>> Setting Sampler to k_lms

* --web was specified, starting web server...
>> Started Invoke AI Web Server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090

Specs

Windows 11, Ryzen 5600X, 32 GB RAM, RTX 3070

sveken commented 1 year ago

Same issue here: Windows 11 22H2, 32 GB RAM, Ryzen 6900HS, Nvidia 3070 Ti 8 GB.

Tried under WSL2 and Ubuntu; same issue.

UnicodeTreason commented 1 year ago

Same root cause as https://github.com/invoke-ai/InvokeAI/issues/1264, though the title on this issue is superior.

I "fixed" mine by forcing the pip versions back to the known working ones: https://github.com/invoke-ai/InvokeAI/issues/1264#issuecomment-1293572653

We need @mauwii or someone else to do a code change ;)
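The pin-the-versions workaround amounts to reinstalling a torch wheel that ships CUDA support from PyTorch's own wheel index. A sketch of the idea; the exact pins in the linked comment may differ, so treat these versions as illustrative and pick the pair matching your CUDA toolkit:

```shell
# run inside the activated invokeai environment; versions are illustrative,
# see https://pytorch.org/get-started/previous-versions/ for CUDA-matched pairs
pip uninstall -y torch torchvision
pip install "torch==1.12.1+cu116" "torchvision==0.13.1+cu116" \
    --extra-index-url https://download.pytorch.org/whl/cu116
```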

SirToastalot commented 1 year ago

Totally the same problem as my #1282 except people actually give a damn lol.

SirToastalot commented 1 year ago

The #1264 method didn't work for me, but I am on Debian and could not find the file where I should replace SIGKILL; it seems the file structure is different on Linux?

phineas-pta commented 1 year ago

> The #1264 method didn't work for me, but I am on Debian and could not find the file where I should replace SIGKILL; it seems the file structure is different on Linux?

For Linux you have to find the Python library path: use `conda info --envs` to see the path of the "invokeai" environment.

You can then locate `lib/python3.10/site-packages/torch/distributed/elastic/timer/file_based_local_timer.py`.
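A stdlib-only sketch for printing that path on any distro (run it with the "invokeai" environment activated so it points at the right Python):

```python
import sysconfig
from pathlib import Path

# site-packages of whichever Python interpreter is currently active
site_packages = Path(sysconfig.get_paths()["purelib"])
print("site-packages:", site_packages)

# location of the file mentioned above, relative to site-packages
timer_file = site_packages / "torch/distributed/elastic/timer/file_based_local_timer.py"
print("torch timer file present:", timer_file.exists())
```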

edit: it still does not work for me on Fedora

edit 2: I confirm that @UnicodeTreason's workaround does work

sveken commented 1 year ago

Can confirm the above workaround worked. I had to delete and recreate the environment afterwards, but it is using CUDA now.

phineas-pta commented 1 year ago

the newest commit fixes the issue! Tested on CentOS.

SirToastalot commented 1 year ago

> the newest commit fixes the issue! Tested on CentOS.

You mean the new release? Do I need to do a fresh reinstall of Invokeai?

phineas-pta commented 1 year ago

> You mean the new release?

The latest release came out just hours after my previous comment; you should try it.

> Do I need to do a fresh reinstall of Invokeai?

For me, yes, because I still had the problem in #1015.

SirToastalot commented 1 year ago

FUCK YEAH, Fixed the CUDA, now model thingamajiggy is FORKED!

psychedelicious commented 1 year ago

lol glad it's working for you guys!