Closed — ivaxsirc closed this issue 1 year ago
I am running the test right now and I have seen the error you mention. It happened to me when starting with the parameters --medvram --opt-split-attention --precision autocast. On a normal startup (without parameters) I have, for the moment, gotten past the preprocessing of the images.

-updated- With 6 GB of VRAM I can't get any further (the parameters don't seem to matter in this case) :( :(
```
Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\ui.py", line 186, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\webui.py", line 64, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\hypernetworks\ui.py", line 33, in train_hypernetwork
    hypernetwork, filename = modules.hypernetworks.hypernetwork.train_hypernetwork(*args)
  File "F:\stable-diffusion-webui\modules\hypernetworks\hypernetwork.py", line 249, in train_hypernetwork
    loss = shared.sd_model(x.unsqueeze(0), cond)[0]
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 879, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 1014, in p_losses
    x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 276, in q_sample
    return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
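For anyone hitting this, here is a minimal sketch in plain PyTorch (not the webui's actual code; the toy model and tensor names below are made up for illustration) of the kind of mismatch the traceback is reporting: the model's weights end up on cuda:0 while a latent or conditioning tensor is still on the CPU, so the first operation that mixes them raises this RuntimeError. The generic fix is to move the inputs to whatever device the model lives on before the forward call.

```python
import torch

# Toy model standing in for the diffusion model; its weights go to cuda:0.
model = torch.nn.Linear(4, 4)
if torch.cuda.is_available():
    model = model.to("cuda:0")

x = torch.randn(1, 4)  # input tensor left on the CPU

try:
    model(x)  # mixing cpu and cuda:0 tensors triggers the error
except RuntimeError as err:
    print(err)  # "Expected all tensors to be on the same device ..."

# Generic fix: put the inputs on the same device as the model's weights.
device = next(model.parameters()).device
x = x.to(device)
model(x)  # runs cleanly now
```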
I am also receiving this error when trying to train via the hypernetwork.
Assuming this is fixed, now that the Dreambooth extension is functioning.
When I generate an image from the webui using a CKPT file generated from the Dreambooth Colab as a reference, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select). What could be causing this?
Thank you
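The "(when checking argument for argument index in method wrapper__index_select)" part usually points at an embedding lookup: the embedding weights from the loaded checkpoint sit on one device (cuda:0) while the index tensor is on the other (cpu), or vice versa. A hedged, self-contained sketch of that situation (the embedding table and token names below are illustrative, not the webui's internals):

```python
import torch

# Toy embedding table standing in for the text encoder's embeddings; weights on cuda:0.
embedding = torch.nn.Embedding(10, 8)
if torch.cuda.is_available():
    embedding = embedding.to("cuda:0")

tokens = torch.tensor([1, 2, 3])  # index tensor left on the CPU

try:
    embedding(tokens)  # index_select mixes cpu indices with cuda:0 weights
except RuntimeError as err:
    print(err)  # "... (when checking argument for argument index in method wrapper__index_select)"

# Fix: keep the indices on the same device as the embedding weights.
tokens = tokens.to(next(embedding.parameters()).device)
embedding(tokens)
```

If the error only shows up with this particular CKPT, it may be worth checking whether part of the model is being kept on the CPU (for example by low-VRAM offloading options such as --medvram) while the rest is on the GPU.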