magic-research / magic-animate

[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
https://showlab.github.io/magicanimate/
BSD 3-Clause "New" or "Revised" License

It's just not working on Windows #147

Closed nitinmukesh closed 9 months ago

nitinmukesh commented 9 months ago

Hello,

I have installed it successfully on Windows following the guide https://github.com/andygock/magic-animate/blob/main/INSTALL-Windows.md

I launched it and provided the inputs, but there is no progress; it is stuck at 0%.

[Screenshot: Gradio UI with the progress bar stuck at 0%]

Console output:

C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:43: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
Initializing MagicAnimate Pipeline...
loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5\unet ...
### missing keys: 560;
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M
The config attributes {'addition_embed_type': None, 'addition_embed_type_num_heads': 64, 'addition_time_embed_dim': None, 'conditioning_channels': 3, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'global_pool_conditions': False, 'num_attention_heads': None, 'transformer_layers_per_block': 1} were passed to ControlNetModel, but are not expected and will be ignored. Please verify your config.json configuration file.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:103: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 0,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:116: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 1,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
Initialization Done!
Running on local URL:  http://127.0.0.1:7860

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:624: FutureWarning: Accessing config attribute `in_channels` directly via 'UNet3DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet3DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
  num_channels_latents = self.unet.in_channels
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]
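
For anyone hitting the same hang: the messages above are only deprecation warnings, so the real question is whether the 25 DDIM steps are actually running on the GPU. A quick sanity check (a sketch only, assuming PyTorch is installed in the same venv; the script name is hypothetical) is to confirm that CUDA is visible before launching the demo:

```python
# diag_cuda.py -- hypothetical helper; run it inside the magic-animate venv.
# A progress bar frozen at 0/25 often just means inference is running on the CPU
# (or the GPU is short on memory), not that the pipeline has crashed.
import torch

print("torch version :", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("device        :", props.name)
    print("total VRAM    :", round(props.total_memory / 1024**3, 1), "GB")
else:
    print("No CUDA device visible -- the pipeline will fall back to CPU and appear stuck.")
```

If `CUDA available` prints `False`, the torch build in the venv is likely CPU-only and would need to be reinstalled with CUDA support.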
nitinmukesh commented 9 months ago

I also tried downloading from this repo, and it launches fine. But when I click on Animate, it is stuck at 0%. Any idea what may be wrong?

(venv) C:\stable_diffusion\magic-animate>run_gradio_demo
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:43: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
Initializing MagicAnimate Pipeline...
loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5\unet ...
### missing keys: 560;
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M
The config attributes {'addition_embed_type': None, 'addition_embed_type_num_heads': 64, 'addition_time_embed_dim': None, 'conditioning_channels': 3, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'global_pool_conditions': False, 'num_attention_heads': None, 'transformer_layers_per_block': 1} were passed to ControlNetModel, but are not expected and will be ignored. Please verify your config.json configuration file.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: partially initialized module 'triton' has no attribute '_C' (most likely due to a circular import)
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:103: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 0,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:116: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 1,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
Initialization Done!
Running on local URL:  http://127.0.0.1:7860
C:\stable_diffusion\magic-animate\magicanimate\pipelines\pipeline_animation.py:624: FutureWarning: Accessing config attribute `in_channels` directly via 'UNet3DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet3DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
  num_channels_latents = self.unet.in_channels
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]
Running on public URL: https://a069f0b1a29dd433c5.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
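
The repeated FutureWarnings in both logs are unrelated to the hang, but they can be silenced: the diffusers messages themselves state the fixes (import `DiffusionPipeline` from `diffusers.pipelines.pipeline_utils`, and set `steps_offset=1` / `clip_sample=False` on the scheduler). A minimal sketch under those assumptions follows; the checkpoint path and `subfolder` are taken or inferred from the log, not from the repo's actual code:

```python
# Sketch only: override the outdated scheduler defaults flagged in the warnings above.
from diffusers import DDIMScheduler
# New import path recommended by the FutureWarning (replaces diffusers.pipeline_utils):
from diffusers.pipelines.pipeline_utils import DiffusionPipeline

scheduler = DDIMScheduler.from_pretrained(
    "pretrained_models/stable-diffusion-v1-5",  # path taken from the log; adjust if different
    subfolder="scheduler",                      # assumed layout of the SD 1.5 checkpoint
    steps_offset=1,                             # warning: "`steps_offset` should be set to 1 instead of 0"
    clip_sample=False,                          # warning: "`clip_sample` should be set to False"
)
```

Either way, these overrides only clean up the warnings; they are not the cause of the progress bar staying at 0%.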
Mo01iHt commented 5 months ago

Hi, I encountered the same problem; the progress bar always stays at 0%. How did you solve it?

nitinmukesh commented 5 months ago

@Mo01iHt

I referred to this: https://youtu.be/O-MTqV7lapg