ali-vilab / UniAnimate

Code for Paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation".
https://unianimate.github.io/

About the installation on Windows, using PowerShell and Miniconda3 #11


zephirusgit commented 2 weeks ago

I always find that I'm missing a lot of things from the requirements. In this case the code also uses NCCL for multi-GPU, which is not available on native Windows (I didn't try WSL because of disk space, and I don't have more GPUs anyway), so GPT-4 suggested I disable it. Going back to the installation, I had to add several more packages that I think are needed but are not in the description; I asked GPT-4 what each one was for, and in the end it worked. With my 12 GB card it is very slow, though (I see it using about 21 GB of shared memory and it crawls), and the progress never moved past 0% even though I could see it processing. I'm sharing my notes in case anyone else hit the same errors when trying to launch the inference. I'm going to try to change something so it doesn't use so much VRAM, to see if it becomes usable.


UniAnimate

git clone https://github.com/ali-vilab/UniAnimate.git
cd UniAnimate
conda create -n UniAnimate python=3.9
conda activate UniAnimate
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt

pip install modelscope

(create modeldownloader.py)

from modelscope.hub.snapshot_download import snapshot_download

model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')
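Then run it; with cache_dir='checkpoints/' the snapshot should land under checkpoints/iic/unianimate/, which is what the mv below expects:

python modeldownloader.py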

mv ./checkpoints/iic/unianimate/* ./checkpoints/

pip install opencv-python

https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/

pip install --upgrade --quiet langchain-experimental
pip install --upgrade --quiet pillow open_clip_torch torch matplotlib

Of course, everyone should check their own CUDA version here; the cu118 index URL below matches CUDA 11.8.
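A quick way to check is from inside the conda environment itself (a minimal sanity check with PyTorch, nothing UniAnimate-specific):

# Print which CUDA toolkit this PyTorch build targets and whether a GPU is visible.
import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)       # e.g. '11.8' -> use the cu118 wheel index
print("GPU available:", torch.cuda.is_available())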

pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
pip install rotary-embedding-torch
pip install fairscale
pip install nvidia-ml-py3
pip install easydict
pip install imageio
pip install pytorch-lightning
pip install args
conda install -c conda-forge pynvml

(Edit inference_unianimate_entrance.py and change the backend from nccl to gloo:)

dist.init_process_group(backend='gloo', world_size=cfg.world_size, rank=cfg.rank)
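A slightly more portable variant (just a sketch, keeping cfg.world_size and cfg.rank from the original call) is to pick the backend at runtime, so the same code still uses nccl where it is available:

import torch.distributed as dist

# gloo is the only backend shipped with native Windows builds of PyTorch;
# nccl stays the choice where it is actually available (Linux + NVIDIA GPUs).
backend = 'nccl' if dist.is_nccl_available() else 'gloo'
dist.init_process_group(backend=backend, world_size=cfg.world_size, rank=cfg.rank)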

python inference.py --cfg configs/UniAnimate_infer.yaml

EKI-INDRADI commented 1 day ago

SOLVED thanks