Open zyddnys opened 1 year ago
Same, been having that problem here too.
Try changing the line in question to `logvar_t = self.logvar[t.cpu()].to(self.device)` and see if that helps.
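For anyone wondering why that fix works, here is a minimal sketch of the mismatch (hypothetical variable names; it falls back to CPU when no GPU is present, in which case both variants behave the same):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

logvar = torch.zeros(1000)  # stays on CPU, like self.logvar in the model
t = torch.randint(0, 1000, (4,), device=device)  # timesteps live on `device`

# Indexing a CPU tensor with GPU indices raises
# "indices should be either on cpu or on the same device as the indexed tensor",
# so move the indices to CPU first, then ship the result back to `device`:
logvar_t = logvar[t.cpu()].to(device)
print(logvar_t.shape)  # torch.Size([4]), now on `device`
```

The key point is that PyTorch advanced indexing requires the index tensor and the indexed tensor to agree on device, so either side can be moved to make them match.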
Well, now I'm running into this kind of problem:
Got any ideas on how to solve it?
```
Arguments: ('Vex', '0.003', 1, '/content/gdrive/MyDrive/Images/AIVEX', 'dream_artist', 512, 704, 1500, 500, 500, '/content/gdrive/MyDrive/sd/stable-diffusion-webui/textual_inversion_templates/style_filewords.txt', True, False, '', '', 20, 0, 7, -1.0, 512, 512, '5.0', '', True, True, 1, 1, 1.0, 25.0, 1.0, 25.0, 0.9, 0.999, False, 1, False, '0.000005') {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/ui.py", line 30, in train_embedding
    embedding, filename = dream_artist.cptuning.train_embedding(*args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py", line 543, in train_embedding
    loss.backward()
  File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__convolution_backward)
```
More details on the problem: this error is only thrown when I enable the "Train with reconstruction" option.
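Since the error only appears with that option enabled, it's likely that some module used by the reconstruction path (a decoder or VAE, for example) was never moved to cuda:0. A hypothetical debugging sketch you could run before `loss.backward()` to locate the offending weights:

```python
import torch
import torch.nn as nn

def find_device_mismatches(model: nn.Module, expected: torch.device):
    """List parameters and buffers that are not on the expected device."""
    mismatched = []
    for name, t in list(model.named_parameters()) + list(model.named_buffers()):
        if t.device != expected:
            mismatched.append((name, t.device))
    return mismatched

# Toy stand-in for the reconstruction network; on a real run you would pass
# the actual model and torch.device("cuda:0").
model = nn.Conv2d(3, 8, 3)
print(find_device_mismatches(model, torch.device("cpu")))  # [] when all match
```

Anything this prints is a candidate for the `weight` argument that `convolution_backward` is complaining about.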
I hit this problem too. I uninstalled accelerate and this error disappeared, but I still couldn't solve "RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)". If anyone figures it out, please tell me; it's driving me crazy :(
Do you know why this is happening? I can fix it by changing that line to `logvar_t = self.logvar.to(self.device)[t]`, but I don't know why self.logvar is not moved to the GPU in the first place.
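One plausible root cause (a guess, not confirmed from the DreamArtist source): `nn.Module.to()` only moves parameters and registered buffers. If `logvar` is stored as a plain tensor attribute, it silently stays on the CPU when the rest of the model is moved. A minimal sketch of the difference:

```python
import torch
import torch.nn as nn

class Plain(nn.Module):
    def __init__(self):
        super().__init__()
        self.logvar = torch.zeros(10)  # plain attribute: NOT moved by .to()

class Buffered(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("logvar", torch.zeros(10))  # moved by .to()

# On a CUDA machine, Plain().to("cuda").logvar still reports device cpu,
# while Buffered().to("cuda").logvar reports cuda:0. The buffer also shows
# up in the state_dict, the plain attribute does not:
print("logvar" in Buffered().state_dict())  # True
print("logvar" in Plain().state_dict())     # False
```

If that's the cause here, registering `logvar` as a buffer upstream would remove the need for the `.cpu()` / `.to(self.device)` workarounds entirely.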