invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: unable to use second GPU `cuda:1` #6010

Open · notdanilo opened 3 months ago

notdanilo commented 3 months ago

Is there an existing issue for this problem?

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

cuda:0: RTX 4090
cuda:1: RTX 3080

GPU VRAM

cuda:0: 24GB
cuda:1: 10GB

Version number

3.7.0

Browser

Chrome

Python dependencies

No response

What happened

Running it with --device "cuda:1" doesn't work; generation fails after the first sampling step with the traceback below.

[2024-03-20 19:59:39,781]::[uvicorn.access]::INFO --> 127.0.0.1:51656 - "GET /api/v1/queue/default/status HTTP/1.1" 200
  5%|████▏                                                                              | 1/20 [00:00<00:07,  2.64it/s]
[2024-03-20 19:59:44,088]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 134, in __process
    outputs = invocation.invoke_internal(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\app\invocations\latent.py", line 773, in invoke
    ) = pipeline.latents_from_embeddings(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 381, in latents_from_embeddings
    latents, attention_map_saver = self.generate_latents_from_embeddings(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 454, in generate_latents_from_embeddings
    step_output = self.step(
                  ^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 587, in step
    uc_noise_pred, c_noise_pred = self.invokeai_diffuser.do_unet_step(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 257, in do_unet_step
    ) = self._apply_standard_conditioning_sequentially(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 445, in _apply_standard_conditioning_sequentially
    unconditioned_next_x = self.model_forward_callback(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 664, in _unet_forward
    return self.unet(
           ^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1081, in forward
    sample = self.conv_in(sample)
             ^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\Text-to-Image\Stable Diffusion\Applications\InvokeAI\.venv\Lib\site-packages\invokeai\backend\model_management\seamless.py", line 17, in _conv_forward_asymmetric
    return nn.functional.conv2d(
           ^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)

[2024-03-20 19:59:44,088]::[InvokeAI]::ERROR --> Error while invoking:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)

What you expected to happen

To be able to run it on cuda:1

How to reproduce the problem

No response

Additional context

It looks like both cuda:0 and cuda:1 are being used, even if I try to select cuda:0. Maybe some parts of the code are hardcoded to cuda:0?
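
For what it's worth, this is the classic mixed-device mismatch in PyTorch: a layer whose weights sit on the default device (cuda:0) receives activations living on cuda:1. A minimal standalone sketch (not InvokeAI code, just an illustration of the failure mode):

    import torch
    import torch.nn as nn

    # "cuda" with no index resolves to cuda:0, so the weights land there.
    conv = nn.Conv2d(4, 4, kernel_size=3).to("cuda")

    # The input tensor lives on the second GPU instead.
    x = torch.randn(1, 4, 64, 64, device="cuda:1")

    # Raises: RuntimeError: Expected all tensors to be on the same device,
    # but found at least two devices, cuda:0 and cuda:1!
    conv(x)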

Discord username

not.danilo

psychedelicious commented 3 months ago

@lstein I suspect this will still be a problem on v4.0.0. Not sure how to approach this myself...

psychedelicious commented 3 months ago

There's a partial fix in #6076, which will be in v4.0.0 or v4.0.1. With that fix you should be able to generate with seamless disabled, but if you enable seamless, I'd expect the same error.
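
The general shape of such a fix is a device guard at the point where inputs meet weights. The sketch below is illustrative only, not the actual #6076 change:

    import torch
    import torch.nn as nn

    def safe_conv_forward(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
        # Align the input with wherever the layer's weights actually live,
        # instead of assuming both were placed on the same GPU.
        if x.device != conv.weight.device:
            x = x.to(conv.weight.device)
        return conv(x)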

psychedelicious commented 3 months ago

Sorry, GitHub closed this but I didn't mean to.

lstein commented 3 months ago

What happens if you set the CUDA_VISIBLE_DEVICES environment variable to cuda:1 instead of using --device?

CUDA_VISIBLE_DEVICES="cuda:1" invokeai-web
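
A note on semantics: CUDA_VISIBLE_DEVICES expects bare device indices (e.g. 1) rather than cuda:1-style names, and the process renumbers whatever remains visible starting from cuda:0, which is what sidesteps any hardcoded cuda:0 references. A quick illustrative check:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must be set before CUDA initializes

    import torch
    print(torch.cuda.device_count())      # 1 -- only the second physical GPU remains
    print(torch.cuda.get_device_name(0))  # it is now addressed as cuda:0
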
notdanilo commented 3 months ago

I won't be able to test it for the next few weeks.

notdanilo commented 2 months ago

Changing

python .venv\Scripts\invokeai-web.exe %*

to

set CUDA_VISIBLE_DEVICES=1 & python .venv\Scripts\invokeai-web.exe %*

worked on Windows.
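
That matches the CUDA_VISIBLE_DEVICES behavior described above: with only GPU 1 exposed, the process addresses the RTX 3080 as cuda:0, so any internal cuda:0 defaults land on the intended card. The equivalent launch on Linux/macOS would presumably be:

CUDA_VISIBLE_DEVICES=1 invokeai-web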