IrisRainbowNeko / DreamArtist-stable-diffusion

stable diffusion webui with contrastive prompt tuning

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) #16

Open · zyddnys opened this issue 1 year ago

zyddnys commented 1 year ago
```
Traceback (most recent call last):
  File "G:\workspace\DreamArtist-stable-diffusion\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "G:\workspace\DreamArtist-stable-diffusion\webui.py", line 54, in f
    res = func(*args, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\ui.py", line 36, in train_embedding
    embedding, filename = modules.dream_artist.cptuning.train_embedding(*args)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\cptuning.py", line 436, in train_embedding
    output = shared.sd_model(x, c_in, scale=cfg_scale)
  File "C:\Users\unknown\miniconda3\envs\pytorch-1.13\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 879, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\cptuning.py", line 286, in p_losses_hook
    logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```

Do you know why this is happening? I can work around it by changing that line to `logvar_t = self.logvar.to(self.device)[t]`, but I don't understand why `self.logvar` isn't moved to the GPU in the first place.
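
For reference, the error is reproducible outside the webui: since PyTorch 1.13, indexing a CPU tensor with a CUDA index tensor raises exactly this RuntimeError. As far as I can tell, `self.logvar` is created in ldm's ddpm.py as a plain tensor attribute rather than a registered buffer, so `model.to(device)` never moves it. A minimal sketch (requires a CUDA build of PyTorch; tensor names are illustrative, not from the DreamArtist code):

```python
import torch

# self.logvar is a plain attribute, not a registered buffer,
# so moving the model to CUDA leaves it behind on the CPU.
logvar = torch.zeros(1000)                        # stand-in for self.logvar (CPU)
t = torch.randint(0, 1000, (4,), device="cuda")   # timesteps sampled on the GPU

# logvar[t]  # -> RuntimeError: indices should be either on cpu or on the same
#            #    device as the indexed tensor (cpu)

# The workaround above: copy the whole buffer to the GPU first, then index.
logvar_t = logvar.to("cuda")[t]
print(logvar_t.shape)  # torch.Size([4])
```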

xITmasterx commented 1 year ago

Same here, I've been running into that problem too.

JPPhoto commented 1 year ago

Try changing the line in question to `logvar_t = self.logvar[t.cpu()].to(self.device)` and see if that helps.
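
Both workarounds should produce the same values; the difference is only what gets copied between devices on each call. A hedged sketch of the two options (names illustrative, not from the extension's code):

```python
import torch

device = "cuda"
logvar = torch.zeros(1000)                       # CPU buffer, as in ddpm.py
t = torch.randint(0, 1000, (4,), device=device)  # GPU timestep indices

# Option A (comment above): copy the whole num_timesteps-sized buffer to the
# GPU on every call, then index there.
a = logvar.to(device)[t]

# Option B (this comment): copy only the tiny batch of indices to the CPU,
# gather there, then move the gathered values to the GPU.
b = logvar[t.cpu()].to(device)

assert torch.equal(a, b)  # identical values either way
```

A one-time alternative would be to move `self.logvar` to the model's device once, before the training loop, so nothing has to be copied per step.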

xITmasterx commented 1 year ago

Well, now I'm running into a different kind of problem:

Got any ideas on how to solve it?


```
Arguments: ('Vex', '0.003', 1, '/content/gdrive/MyDrive/Images/AIVEX', 'dream_artist', 512, 704, 1500, 500, 500, '/content/gdrive/MyDrive/sd/stable-diffusion-webui/textual_inversion_templates/style_filewords.txt', True, False, '', '', 20, 0, 7, -1.0, 512, 512, '5.0', '', True, True, 1, 1, 1.0, 25.0, 1.0, 25.0, 0.9, 0.999, False, 1, False, '0.000005') {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/ui.py", line 30, in train_embedding
    embedding, filename = dream_artist.cptuning.train_embedding(*args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py", line 543, in train_embedding
    loss.backward()
  File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__convolution_backward)
```
xITmasterx commented 1 year ago

More details on the problem: the error is thrown whenever I enable the "Train with reconstruction" option.
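
The `convolution_backward` message means some conv weight used in the loss is on the CPU while its activations are on cuda:0; one guess (an assumption, not confirmed) is that the reconstruction path pulls in a module, such as the first-stage VAE, that was left on the CPU. A quick, hedged check to locate any such weights, assuming it runs inside the webui process where `modules.shared.sd_model` is the loaded model (as in the tracebacks above):

```python
from modules import shared

# List every parameter that is not on the GPU; at least one conv weight
# involved in the reconstruction loss should show up here.
for name, p in shared.sd_model.named_parameters():
    if p.device.type != "cuda":
        print(name, p.device)
```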

a-cold-bird commented 1 year ago

> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__convolution_backward)

I've met this problem too. I uninstalled accelerate and it went away, but I still haven't solved the other error, "RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)". If anyone solves it, please tell me; it's driving me crazy :(