I clicked the Generate button on the stable_diffusion_webui_colab stable branch and got this error. I used the PNG Info tab to load prompts I had previously used, and when they loaded an error badge appeared on the positive and negative prompt fields. RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Colab cell output
Error completing request
Arguments: ('task(ghtpy7528bg8y9r)', 'Jiwon,girl jumping off a building, (wind in hair:1.1), smiling ,white top, (tight skirt:1.1), (nice boobs:1.1),short hair, blush, full body, long legs, nice booty,blue hair, lips, best quality, ((smooth thighs)) (8k,RAW photo, best quality,masterpiece:1.2),(realistic,photo-realistic:1.5),ultra-detailed, 50mm lens, dslr, soft lighting, high quality, film grain, Fujifilm, stunning quality, highres,cleavage,athletic body,sweat, skindentation', 'out of frame, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature, cgi, octane, render\n', [], 100, 0, True, False, 1, 1, 25, 3703240188.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, True, False, 0, -1, False, '', 0, False, False, 'LoRA', 'fromisJiwon(215cd9be181e)', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.ui.controlnet_ui_group.UiControlNetUnit object at 0x7fc114d94af0>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/content/stable-diffusion-webui/modules/processing.py", line 642, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/content/stable-diffusion-webui/modules/processing.py", line 587, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 135, in encode
return self(text)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 125, in forward
outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == "hidden")
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
>>> last_hidden_state = outputs.last_hidden_state
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 708, in forward
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 223, in forward
if position_ids is None:
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
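The RuntimeError above means the CLIP text encoder's embedding weights were on `cuda:0` while the token-index tensor was still on the CPU. A minimal sketch (not the webui's code, just an assumed reproduction of the same mismatch) shows how `nn.Embedding` raises this and how moving the indices to the weights' device resolves it:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)        # embedding weights live on CPU by default
idx = torch.tensor([1, 2, 3])    # token indices, also on CPU

if torch.cuda.is_available():
    emb = emb.cuda()             # weights now on cuda:0
    try:
        emb(idx)                 # indices still on CPU -> device mismatch
    except RuntimeError as e:
        print(e)                 # "Expected all tensors to be on the same device..."
    out = emb(idx.to(emb.weight.device))  # fix: move indices to the weights' device
else:
    out = emb(idx)               # CPU-only: both tensors already match

print(out.shape)
```

In the webui this mismatch usually points at a model or extension that left part of the pipeline on the CPU (for example after a failed checkpoint load), rather than at the prompt itself.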
Error running process: /content/stable-diffusion-webui/extensions/asymmetric-tiling-sd-webui/scripts/asymmetric_tiling.py
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 417, in process
script.process(p, *script_args)
File "/content/stable-diffusion-webui/extensions/asymmetric-tiling-sd-webui/scripts/asymmetric_tiling.py", line 52, in process
self.__restoreConv2DMethods()
File "/content/stable-diffusion-webui/extensions/asymmetric-tiling-sd-webui/scripts/asymmetric_tiling.py", line 74, in __restoreConv2DMethods
for layer in modules.sd_hijack.model_hijack.layers:
TypeError: 'NoneType' object is not iterable
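The secondary TypeError comes from the asymmetric-tiling extension iterating over `model_hijack.layers` while it is still `None` (likely because the model hijack was never populated after the earlier failure). A hypothetical sketch of the failing pattern, with `ModelHijack` and `restore_layers` as stand-in names, shows the guard that avoids it:

```python
class ModelHijack:
    """Stand-in for modules.sd_hijack.model_hijack (hypothetical)."""
    def __init__(self):
        self.layers = None  # not yet populated, e.g. after a failed model load

def restore_layers(hijack):
    # Iterating `hijack.layers` directly raises
    # "TypeError: 'NoneType' object is not iterable" when it is None;
    # `or []` degrades gracefully to a no-op instead.
    restored = 0
    for layer in (hijack.layers or []):
        restored += 1  # restore each hijacked Conv2D layer here
    return restored

print(restore_layers(ModelHijack()))  # 0 restored, no TypeError
```

This is only an illustration of the error mechanism; the real fix belongs in the extension's `__restoreConv2DMethods`.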
Error completing request
Arguments: ('task(aeron0tmwrvoquz)', 'Jiwon,girl jumping off a building, (wind in hair:1.1), smiling ,white top, (tight skirt:1.1), (nice boobs:1.1),short hair, blush, full body, long legs, nice booty,blue hair, lips, best quality, ((smooth thighs)) (8k,RAW photo, best quality,masterpiece:1.2),(realistic,photo-realistic:1.5),ultra-detailed, 50mm lens, dslr, soft lighting, high quality, film grain, Fujifilm, stunning quality, highres,cleavage,athletic body,sweat, skindentation', 'out of frame, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature, cgi, octane, render\n', [], 30, 0, True, False, 1, 1, 25, 3703240188.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, True, False, 0, -1, False, '', 0, False, False, 'LoRA', 'fromisJiwon(215cd9be181e)', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.ui.controlnet_ui_group.UiControlNetUnit object at 0x7fc114d55450>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, None, False, 50) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/content/stable-diffusion-webui/modules/processing.py", line 642, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/content/stable-diffusion-webui/modules/processing.py", line 587, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 665, in get_learned_conditioning
c = self.cond_stage_model.encode(c)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 135, in encode
return self(text)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 125, in forward
outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == "hidden")
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
>>> last_hidden_state = outputs.last_hidden_state
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 708, in forward
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 223, in forward
if position_ids is None:
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Which colab and model(s) were you using when the error occurred?
https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/stable/stable_diffusion_webui_colab.ipynb
https://civitai.com/api/download/models/48388
Which Public WebUI Colab URL were you using when the error occurred?
gradio.live
If you used HiRes mode when the error occurred, please provide the Hires info
No response