Closed: stdNullPtr closed this issue 9 months ago.
Does this error only occur after multiple renders, or directly after the start of Fooocus? Do you by chance use --always-high-vram? Please post the FULL terminal output, from the start command to the error.
Directly after start, every time. Inpainting with an image input works, by the way. I am not using that parameter. I will be able to post the full log tomorrow.
If it helps, my system is a Ryzen 7 3700X, an AMD RX 7800 XT, and 32 GB RAM.
Just reading this: I'm getting this error when trying to do a faceswap for the first time. I'm also using an AMD GPU, a Radeon RX 6900 XT, with an AMD Ryzen Threadripper 3960X 24-core CPU.
Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!
Anything I can try to get past this error?
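This error means the LayerNorm's weights and its input sit on different devices (the DirectML device shows up as privateuseone, the weights stay on cpu). Here is a minimal sketch of the failure mode and the generic fix, outside Fooocus; the names below are illustrative, not Fooocus's actual code:

```python
import torch

# torch.layer_norm requires the input tensor and the module's
# weight/bias to live on the same device. If lowvram offloading
# leaves the module's weights on the CPU while the input is on the
# DirectML device, the call raises the RuntimeError above. The
# generic fix is to align devices before the call.

def aligned_layer_norm(norm: torch.nn.LayerNorm, x: torch.Tensor) -> torch.Tensor:
    # Move the input to wherever the norm's parameters currently are.
    target = norm.weight.device
    return norm(x.to(target))

norm = torch.nn.LayerNorm(1280)   # weights on CPU
x = torch.randn(2, 16, 1280)      # imagine this on the DirectML device
out = aligned_layer_norm(norm, x) # shape preserved: (2, 16, 1280)
```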
Same issue for me. It happens with both FaceSwap and ImagePrompt; everything else runs fine. It started about 8 hours ago; before then it was fine? O_O
I'm also on AMD: a Ryzen 7 5800X and a Radeon RX 6900 XT, using the --directml flag, on Windows 11.
Hmmmm, I tried the --always-cpu flag and I'm getting faceswaps, baby! It still seems to offload the model to the GPU at some point... try that!
@lllyasviel this kind of error keeps popping up, I assume only for AMD GPU users. Is this related to the improved VRAM handling for AMD GPUs that was implemented recently?
@mashb1t here is the FULL log, from run.bat > placing 1 image in image prompt > pressing generate
Microsoft Windows [Version 10.0.19045.3803]
(c) Microsoft Corporation. All rights reserved.
H:\Programs\Fooocus_win64_2-1-831>run.bat
H:\Programs\Fooocus_win64_2-1-831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--directml']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.857
Running on local URL: http://127.0.0.1:7865
To create a public link, set `share=True` in `launch()`.
Using directml with device:
Total VRAM 1024 MB, total RAM 32699 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 3463323323676320965
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
extra clip vision: ['vision_model.embeddings.position_ids']
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
Requested to load Resampler
Loading 1 new model
loading in lowvram mode 64.0
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=2048, bias=True)
lowvram: loaded module regularly LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=2560, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=5120, bias=False)
lowvram: loaded module regularly Linear(in_features=5120, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=2560, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=5120, bias=False)
lowvram: loaded module regularly Linear(in_features=5120, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=2560, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=5120, bias=False)
lowvram: loaded module regularly Linear(in_features=5120, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=2560, bias=False)
lowvram: loaded module regularly Linear(in_features=1280, out_features=1280, bias=False)
lowvram: loaded module regularly LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly Linear(in_features=1280, out_features=5120, bias=False)
lowvram: loaded module regularly Linear(in_features=5120, out_features=1280, bias=False)
[Fooocus Model Management] Moving model(s) has taken 0.13 seconds
Traceback (most recent call last):
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 806, in worker
    handler(task)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 647, in handler
    task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\ip_adapter.py", line 185, in preprocess
    cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\resampler.py", line 117, in forward
    latents = attn(x, latents) + latents
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\resampler.py", line 55, in forward
    latents = self.norm2(latents)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!
Total time: 16.87 seconds
Keyboard interruption in main thread... closing server.
Terminate batch job (Y/N)?
@mashb1t just confirmed that with .\python_embeded\python.exe -s Fooocus\entry_with_update.py --always-cpu --directml it works as a temporary workaround
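The traceback points at extras/ip_adapter.py, where the projection model is called and its output is then moved to load_device; in lowvram mode the model's weights may sit on a different device than the incoming cond tensor. A hedged sketch of the device-aligning pattern such a call needs (project_cond is a hypothetical helper for illustration, not the actual Fooocus patch):

```python
import torch

def project_cond(model: torch.nn.Module, cond: torch.Tensor,
                 load_device: torch.device, dtype: torch.dtype) -> torch.Tensor:
    # Hypothetical helper mirroring the failing call: run the projection
    # on whatever device the model's weights actually sit on, then move
    # the result to where the sampler expects it.
    model_device = next(model.parameters()).device
    out = model(cond.to(model_device))
    return out.to(device=load_device, dtype=dtype)

# Stand-in for the IP-Adapter image projection model:
proj = torch.nn.Linear(1280, 2048)
cond = torch.randn(1, 1280)
result = project_cond(proj, cond, torch.device("cpu"), torch.float32)
```

With --always-cpu every tensor lands on the CPU, which is why the workaround sidesteps the mismatch entirely (at the cost of speed).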
I do not have an AMD GPU right now, but I have posted a possible fix. Try 2.1.858 and let me know if it works. If it does not, wait until I have access to AMD hardware again.
No go, sadly. Different errors this time; here are the logs in case they help (--always-cpu still works on the new build, but this is a different error now!).
E:\Fooocus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--directml', '--preset', 'realistic']
Loaded preset: E:\Fooocus\Fooocus\presets\realistic.json
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.858
Running on local URL: http://127.0.0.1:7865
To create a public link, set `share=True` in `launch()`.
Using directml with device:
Total VRAM 1024 MB, total RAM 65446 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: E:\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Loaded LoRA [E:\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [E:\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [E:\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [E:\Fooocus\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.76 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 5973531860309548148
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
extra clip vision: ['vision_model.embeddings.position_ids']
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 12
[Fooocus] Initializing ...
[Fooocus] Loading models ...
model_type EPS
UNet ADM Dimension 0
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Refiner model loaded: E:\Fooocus\Fooocus\models\checkpoints\realisticVisionV60B1_v60B1VAE.safetensors
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: E:\Fooocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\Fooocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [E:\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [E:\Fooocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.25.
Loaded LoRA [E:\Fooocus\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [E:\Fooocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 264 keys at weight 0.25.
Request to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\Fooocus\Fooocus\models\checkpoints\realisticVisionV60B1_v60B1VAE.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.63 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] perfect beautiful cyberpunk woman standing in dark futuristic cyberpunk environment, innocent seductive expression, full body in view, detailed facial expression, lighting and natural metering very realistic, detailed background, analog film, light and shadow effects 32K ultra-high image quality near perfect, volumetric lighting, neon lighting, photo like image,accurate hands, backlit, best quality, super detailed, realistic, looking at viewer, film still, high detail, ominous, intricate, epic, mysterious,long messy hair pink blue highlights, real eyes, artistic, sharp focus, modern fine classic cinematic composition, new, color, royal, shiny, amazing deep colors, inspired, rich vivid, great symmetry, lucid fantastic, pure brilliant, excellent balance
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] perfect beautiful cyberpunk woman standing in dark futuristic cyberpunk environment, innocent seductive expression, full body in view, detailed facial expression, lighting and natural metering very realistic, detailed background, analog film, light and shadow effects 32K ultra-high image quality near perfect, volumetric lighting, neon lighting, photo like image,accurate hands, backlit, best quality, super detailed, realistic, looking at viewer, film still, high detail, ominous, intricate, epic, mysterious,long messy hair pink blue highlights, real eyes, artistic, sharp focus, modern, new, color, fine classic, open composition, professional, elegant, stunning, creative, attractive, cute, romantic, pretty, illuminated, cool, friendly, generous
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
Detected 1 faces
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
Requested to load Resampler
Loading 1 new model
loading in lowvram mode 64.0
Traceback (most recent call last):
  File "E:\Fooocus\Fooocus\modules\async_worker.py", line 806, in worker
    handler(task)
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus\Fooocus\modules\async_worker.py", line 661, in handler
    task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_face_path)
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus\Fooocus\extras\ip_adapter.py", line 188, in preprocess
    cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Fooocus\Fooocus\extras\resampler.py", line 117, in forward
    latents = attn(x, latents) + latents
  File "E:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Fooocus\Fooocus\extras\resampler.py", line 60, in forward
    kv_input = torch.cat((x, latents), dim=-2)
RuntimeError: tensor.device().type() == at::DeviceType::PrivateUse1 INTERNAL ASSERT FAILED at "D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\DMLTensor.cpp":31, please report a bug to PyTorch. unbox expects Dml at::Tensor as inputs
Total time: 30.92 seconds
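With 2.1.858 the failure moved one level down: the torch.cat in resampler.py now receives a mix of DirectML and CPU tensors, and instead of the clean same-device RuntimeError the DirectML backend trips an internal assert. The underlying requirement is the same. A sketch of a device guard (safe_cat is illustrative only, not the actual fix that landed):

```python
import torch

def safe_cat(x: torch.Tensor, latents: torch.Tensor, dim: int = -2) -> torch.Tensor:
    # torch.cat requires every input to be on the same device; the
    # DirectML plugin fails an internal assert instead of raising a
    # clean error when handed a mix of DML and CPU tensors, so align
    # the devices up front.
    if latents.device != x.device:
        latents = latents.to(x.device)
    return torch.cat((x, latents), dim=dim)

x = torch.randn(1, 257, 1280)        # e.g. CLIP vision tokens
latents = torch.randn(1, 16, 1280)   # e.g. resampler latents
kv_input = safe_cat(x, latents)      # concatenated along dim -2: (1, 273, 1280)
```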
try 2.1.859 again
Works perfectly again for me now!!! Thank you very much for your time!
Ty bro
I am trying to use two images as an image prompt, but when I press Generate this is what I get (I can generate just fine without image prompts):
Full console log:
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 3
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 1.5
[Parameters] Seed = 953753918774495193
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 6 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\Programs\Fooocus_win64_2-1-831\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
Traceback (most recent call last):
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 806, in worker
    handler(task)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 647, in handler
    task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\ip_adapter.py", line 185, in preprocess
    cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\resampler.py", line 117, in forward
    latents = attn(x, latents) + latents
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\Fooocus\extras\resampler.py", line 55, in forward
    latents = self.norm2(latents)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "H:\Programs\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!
Total time: 37.40 seconds