rupeshs / fastsdcpu

Fast stable diffusion on CPU
MIT License

RuntimeError: Tensor on device meta is not on the expected device cpu! #209

Closed: Ce-daros closed this issue 6 days ago

Ce-daros commented 1 week ago
Running on Windows platform
OS: Windows-10-10.0.22631-SP0
Processor: Intel64 Family 6 Model 170 Stepping 4, GenuineIntel
Using device : cpu
Found 9 LCM models in config/lcm-models.txt
Found 7 stable diffusion models in config/stable-diffusion-models.txt
Found 4 LCM-LoRA models in config/lcm-lora-models.txt
Found 7 OpenVINO LCM models in config/openvino-lcm-models.txt
C:\Users\spawn\.conda\envs\sd\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_5m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_5m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
C:\Users\spawn\.conda\envs\sd\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_11m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_11m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
C:\Users\spawn\.conda\envs\sd\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
C:\Users\spawn\.conda\envs\sd\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_384 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_384. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
C:\Users\spawn\.conda\envs\sd\lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_512 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
Starting web UI mode
No lora models found, please add lora models to C:\Program Files\FastSD-CPU\lora_models directory
C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\components\dropdown.py:179: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include:  or set allow_custom_value=True.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 4.21.0, however version 4.29.0 is available, please upgrade.
--------
Torch datatype : torch.float32
{'controlnet': None,
 'diffusion_task': 'text_to_image',
 'dirs': {'controlnet': 'C:\\Program Files\\FastSD-CPU\\controlnet_models',
          'lora': 'C:\\Program Files\\FastSD-CPU\\lora_models'},
 'guidance_scale': 1.0,
 'image_height': 512,
 'image_width': 512,
 'inference_steps': 6,
 'init_image': None,
 'lcm_lora': {'base_model_id': 'Lykon/dreamshaper-8',
              'lcm_lora_id': 'latent-consistency/lcm-lora-sdv1-5'},
 'lcm_model_id': 'D:\\Program Files\\FastSD-CPU\\models\\Autismmix_Lightning',
 'lora': {'enabled': False,
          'fuse': True,
          'models_dir': 'C:\\Program Files\\FastSD-CPU\\lora_models',
          'path': '',
          'weight': 0.5},
 'negative_prompt': '',
 'number_of_images': 1,
 'openvino_lcm_model_id': 'rupeshs/sd-turbo-openvino',
 'prompt': '',
 'rebuild_pipeline': False,
 'seed': 123123,
 'strength': 0.6,
 'use_lcm_lora': False,
 'use_offline_model': True,
 'use_openvino': False,
 'use_safety_checker': True,
 'use_seed': False,
 'use_tiny_auto_encoder': False}
***** Init LCM Model pipeline - D:\Program Files\FastSD-CPU\models\Autismmix_Lightning *****
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:04<00:00,  1.65it/s]
Pipeline : StableDiffusionXLPipeline {
  "_class_name": "StableDiffusionXLPipeline",
  "_diffusers_version": "0.27.2",
  "_name_or_path": "D:\\Program Files\\FastSD-CPU\\models\\Autismmix_Lightning",
  "feature_extractor": [
    null,
    null
  ],
  "force_zeros_for_empty_prompt": true,
  "image_encoder": [
    null,
    null
  ],
  "scheduler": [
    "diffusers",
    "EulerDiscreteScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "text_encoder_2": [
    "transformers",
    "CLIPTextModelWithProjection"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "tokenizer_2": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

Active adapters : []
Traceback (most recent call last):
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\route_utils.py", line 253, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\blocks.py", line 1695, in process_api
    result = await self.call_function(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\blocks.py", line 1235, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\gradio\utils.py", line 692, in wrapper
    response = f(*args, **kwargs)
  File "D:\Program Files\FastSD-CPU\src\frontend\webui\text_to_image_ui.py", line 54, in generate_text_to_image
    images = future.result()
  File "C:\Users\spawn\.conda\envs\sd\lib\concurrent\futures\_base.py", line 458, in result
    return self.__get_result()
  File "C:\Users\spawn\.conda\envs\sd\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\spawn\.conda\envs\sd\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "D:\Program Files\FastSD-CPU\src\context.py", line 59, in generate_text_to_image
    images = self.lcm_text_to_image.generate(
  File "D:\Program Files\FastSD-CPU\src\backend\lcm_text_to_image.py", line 355, in generate
    result_images = self.pipeline(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1054, in __call__
    ) = self.encode_prompt(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 383, in encode_prompt
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\transformers\models\clip\modeling_clip.py", line 1216, in forward
    text_outputs = self.text_model(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\transformers\models\clip\modeling_clip.py", line 711, in forward
    encoder_outputs = self.encoder(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\transformers\models\clip\modeling_clip.py", line 638, in forward
    layer_outputs = encoder_layer(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\transformers\models\clip\modeling_clip.py", line 380, in forward
    hidden_states, attn_weights = self.self_attn(
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\transformers\models\clip\modeling_clip.py", line 269, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
    result = fn(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_decomp\decompositions.py", line 72, in inner
    r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_decomp\decompositions.py", line 1431, in addmm
    return out + beta * self
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
    result = fn(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_prims_common\wrappers.py", line 137, in _fn
    result = fn(**bound.arguments)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_refs\__init__.py", line 1091, in add
    output = prims.add(a, b)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_ops.py", line 594, in __call__
    return self_._op(*args, **kwargs)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_prims\__init__.py", line 359, in _prim_elementwise_meta
    utils.check_same_device(*args_, allow_cpu_scalar_tensors=True)
  File "C:\Users\spawn\.conda\envs\sd\lib\site-packages\torch\_prims_common\__init__.py", line 740, in check_same_device
    raise RuntimeError(msg)
RuntimeError: Tensor on device meta is not on the expected device cpu!

I'm using an Intel Core Ultra 9 185H.
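For context, a minimal plain-PyTorch sketch (not FastSD CPU code) of what a "meta" tensor is, and why mixing one with real CPU data produces this kind of device-mismatch error:

```python
import torch

# A "meta" tensor carries only shape and dtype metadata; it has no storage.
t = torch.empty(3, device="meta")
print(t.device)   # meta
print(t.is_meta)  # True

# Computing with real CPU data fails because there is no data to read;
# this device mismatch is the root of errors like
# "Tensor on device meta is not on the expected device cpu!"
try:
    torch.ones(3) + t
    err = None
except RuntimeError as e:
    err = e
print(err is not None)  # True
```

In the traceback above, the same check fires inside `F.linear` of the text encoder, which suggests some of the CLIP weights were never materialized as real CPU tensors during loading.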

rupeshs commented 1 week ago

@Ce-daros I don't have an Intel Core Ultra 9 185H, so please reinstall and try again. Alternatively, run `set DEVICE=NPU` in the command prompt before running start.bat (this uses OpenVINO mode).
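As an aside, one common way weights end up on the meta device is when a module is initialized there (e.g. to defer memory allocation) and never materialized before use. A minimal plain-PyTorch sketch of the failure mode and the usual fix, not the FastSD CPU code path:

```python
import torch
import torch.nn as nn

# Initializing under the meta device allocates no memory for weights.
with torch.device("meta"):
    model = nn.Linear(8, 2)
print(model.weight.is_meta)  # True

# .to("cpu") cannot copy data that does not exist; instead, materialize
# empty CPU storage with to_empty(), then load real weights into it
# (here from a freshly initialized Linear, standing in for a checkpoint).
model = model.to_empty(device="cpu")
model.load_state_dict(nn.Linear(8, 2).state_dict())

# Inference now works on CPU.
out = model(torch.ones(1, 8))
print(out.shape)  # torch.Size([1, 2])
```

If a loader skips the materialization step for some submodule, the first forward pass through that submodule fails with exactly the device-mismatch error reported here.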