magic-research / magic-animate

[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
https://showlab.github.io/magicanimate/
BSD 3-Clause "New" or "Revised" License

Please fix dependency handling for Windows #50

Closed. EKI-INDRADI closed this issue 11 months ago.

EKI-INDRADI commented 11 months ago

Remove nvidia-cudnn-cu11==8.5.0.96 from requirements.txt because it is Linux-only (install cuDNN manually on Windows) [solved]

Remove nvidia-nccl-cu11==2.14.3 from requirements.txt because it is Linux-only (install the CUDA SDK manually on Windows) [solved]

Remove triton==2.0.0 from requirements.txt because it is Linux-only (I don't know how to fix this on Windows)
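
One way to strip those three Linux-only pins in a single step (a sketch using Windows findstr against the repo's original requirements.txt; the requirements_windows.txt filename is just an example, not something the repo provides):

findstr /v /c:"nvidia-cudnn-cu11" /c:"nvidia-nccl-cu11" /c:"triton==" requirements.txt > requirements_windows.txt
pip install -r requirements_windows.txt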

huiyichanmian commented 11 months ago

There is a known issue with installing on Windows; you can watch this video: https://www.bilibili.com/video/BV1ig4y1f7BQ/?share_source=copy_web&vd_source=842db193ae3e8fc29019a57821e30000

EKI-INDRADI commented 11 months ago

Remove nvidia-cudnn-cu11==8.5.0.96 from requirements.txt because it is Linux-only (install cuDNN manually on Windows)

Remove nvidia-nccl-cu11==2.14.3 from requirements.txt because it is Linux-only (install the CUDA SDK manually on Windows)

Remove triton==2.0.0 from requirements.txt because it is Linux-only (I don't know how to fix this on Windows)

Already installed on the system, with environment variables set: CUDA SDK 11.8, cuDNN 8.9+, and FFMPEG 6.
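
A quick way to confirm those prerequisites are actually on PATH (just a sanity check, not part of the original steps):

nvcc --version
ffmpeg -version
nvidia-smi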

Create a pretrained_models directory and clone the model repositories into it:

git lfs clone https://huggingface.co/zcxu-eric/MagicAnimate
git lfs clone https://huggingface.co/stabilityai/sd-vae-ft-mse
git lfs clone https://huggingface.co/runwayml/stable-diffusion-v1-5

requirements.txt

absl-py==1.4.0
accelerate==0.22.0
aiofiles==23.2.1
aiohttp==3.8.5
aiosignal==1.3.1
altair==5.0.1
annotated-types==0.5.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.2.0
click==8.1.7
cmake==3.27.2
contourpy==1.1.0
cycler==0.11.0
datasets==2.14.4
dill==0.3.7
einops==0.6.1
exceptiongroup==1.1.3
fastapi==0.103.0
ffmpy==0.3.1
filelock==3.12.2
fonttools==4.42.1
frozenlist==1.4.0
fsspec==2023.6.0
google-auth==2.22.0
google-auth-oauthlib==1.0.0
gradio==3.41.2
gradio-client==0.5.0
grpcio==1.57.0
h11==0.14.0
httpcore==0.17.3
httpx==0.24.1
huggingface-hub==0.16.4
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.0.1
jinja2==3.1.2
joblib==1.3.2
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
kiwisolver==1.4.5
lightning-utilities==0.9.0
lit==16.0.6
markdown==3.4.4
markupsafe==2.1.3
matplotlib==3.7.2
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.15
networkx==3.1
numpy==1.24.4
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nvtx-cu11==11.7.91
oauthlib==3.2.2
omegaconf==2.3.0
opencv-python==4.8.0.76
orjson==3.9.5
pandas==2.0.3
pillow==9.5.0
pkgutil-resolve-name==1.3.10
protobuf==4.24.2
psutil==5.9.5
pyarrow==13.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==2.3.0
pydantic-core==2.6.3
pydub==0.25.1
pyparsing==3.0.9
python-multipart==0.0.6
pytorch-lightning==2.0.7
pytz==2023.3
pyyaml==6.0.1
referencing==0.30.2
regex==2023.8.8
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.9.2
rsa==4.9
safetensors==0.3.3
semantic-version==2.10.0
sniffio==1.3.0
starlette==0.27.0
sympy==1.12
tensorboard==2.14.0
tensorboard-data-server==0.7.1
tokenizers==0.13.3
toolz==0.12.0
torchmetrics==1.1.0
tqdm==4.66.1
transformers==4.32.0
tzdata==2023.3
urllib3==1.26.16
uvicorn==0.23.2
websockets==11.0.3
werkzeug==2.3.7
xxhash==3.3.0
yarl==1.9.2
zipp==3.16.2
decord
imageio==2.9.0
imageio-ffmpeg==0.4.3
timm
scipy
scikit-image
av
imgaug
lpips
ffmpeg-python
torch==2.0.1
torchvision==0.15.2
xformers==0.0.22
diffusers==0.21.4

conda create -n m_animate python=3.10 -y
conda activate m_animate
pip install --upgrade pip
pip install -r requirements.txt

pip uninstall torch
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
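
To confirm the CUDA 11.8 wheel actually replaced the default build (a quick check, assuming the m_animate environment is still active; it should print a +cu118 version and True):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"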

python -m magicanimate.pipelines.animation --config configs\prompts\animation.yaml
python -m magicanimate.pipelines.animation --config configs\prompts\animation.yaml --dist

ERROR TRITON & MEMORY ALLOCATION (RTX 3080 10GB)


(m_animate) V:\_ANIMATION\MAGIC_ANIMATE\magic-animate>python -m magicanimate.pipelines.animation --config configs\prompts\animation.yaml
V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\pipeline_animation.py:43: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5\unet ...
### missing keys: 560;
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M
The config attributes {'addition_embed_type': None, 'addition_embed_type_num_heads': 64, 'addition_time_embed_dim': None, 'conditioning_channels': 3, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'global_pool_conditions': False, 'num_attention_heads': None, 'transformer_layers_per_block': 1} were passed to ControlNetModel, but are not expected and will be ignored. Please verify your config.json configuration file.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
It is recommended to provide `attention_head_dim` when calling `get_down_block`. Defaulting `attention_head_dim` to 8.
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\pipeline_animation.py:103: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 0,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\pipeline_animation.py:116: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.21.4",
  "beta_end": 0.012,
  "beta_schedule": "linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "dynamic_thresholding_ratio": 0.995,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "sample_max_value": 1.0,
  "set_alpha_to_one": true,
  "steps_offset": 1,
  "thresholding": false,
  "timestep_spacing": "leading",
  "trained_betas": null
}
 has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
  0%|                                                                                                                                         | 0/6 [00:00<?, ?it/s]current seed: 1
V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\pipeline_animation.py:624: FutureWarning: Accessing config attribute `in_channels` directly via 'UNet3DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet3DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
  num_channels_latents = self.unet.in_channels
  4%|█████                                                                                                                           | 1/25 [00:14<05:45, 14.41s/it]
  0%|                                                                                                                                         | 0/6 [00:19<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\animation.py", line 282, in <module>
    run(args)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\animation.py", line 271, in run
    main(args)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\animation.py", line 197, in main
    sample = pipeline(
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\pipelines\pipeline_animation.py", line 738, in __call__
    pred = self.unet(
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\models\unet_controlnet.py", line 462, in forward
    sample = upsample_block(
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\models\unet_3d_blocks.py", line 653, in forward
    hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\models\attention.py", line 136, in forward
    hidden_states = block(
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "V:\_ANIMATION\MAGIC_ANIMATE\magic-animate\magicanimate\models\mutual_self_attention.py", line 272, in hacked_basic_transformer_inner_forward
    hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\diffusers\models\attention.py", line 307, in forward
    hidden_states = module(hidden_states, scale)
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\NGXCRYPT-2ND\.conda\envs\m_animate\lib\site-packages\diffusers\models\attention.py", line 356, in forward
    return hidden_states * self.gelu(gate)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 320.00 MiB (GPU 0; 10.00 GiB total capacity; 8.34 GiB already allocated; 0 bytes free; 9.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(m_animate) V:\_ANIMATION\MAGIC_ANIMATE\magic-animate>
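
The out-of-memory message itself suggests trying max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF. On a 10 GB card this may or may not be enough, but it is cheap to try; the 128 value below is only an example, set in the same cmd session before launching:

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python -m magicanimate.pipelines.animation --config configs\prompts\animation.yaml
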
EKI-INDRADI commented 11 months ago

There is a known issue with installing on Windows; you can watch this video: https://www.bilibili.com/video/BV1ig4y1f7BQ/?share_source=copy_web&vd_source=842db193ae3e8fc29019a57821e30000

Thanks for sharing... but the video resolution is very low.

Update: managed to create an account and watch it in 1080p (creating the account took some effort because of the language).

Update: still getting an error, see https://github.com/sdbds/magic-animate-for-windows/issues/1

EKI-INDRADI commented 11 months ago

Solved: https://github.com/EKI-INDRADI/magic-animate-for-windows-rnd-20231206