aduchon opened 1 year ago
If anyone is interested, this is what I had to do to get it to work in Google Colab. I needed high-RAM GPUs, and there might be some extra steps in here. I followed along with this PC-oriented tutorial: https://www.youtube.com/watch?v=IxoXq9PiPis
I still get this warning:
tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
but it doesn't seem to matter. First, git clone the repo:
new_install = False #@param{type:"boolean"}
%cd {BASE_PATH}  # e.g., /content/drive/MyDrive/AI/AnimateDiff
if new_install:  # only run once as true
    !git clone https://github.com/s9roll7/animatediff-cli-prompt-travel.git
%cd animatediff-cli-prompt-travel
- copy mm_sd_v15_v2.ckpt from https://huggingface.co/guoyww/animatediff/tree/main to ./data/models/motion-module
- copy a 1.5 stable diffusion .safetensors model into ./data/models/sd
- download the ip_adapter models from https://github.com/tencent-ailab/IP-Adapter into data/models/ip_adapter/models (or add shortcuts to copies you already have)
- make sure the ip_adapter image filenames match the frame numbers of the prompts
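Before running anything, it can save a failed generation to sanity-check that the files from the steps above are where the repo expects them. This is just a sketch I'd use; the helper name and repo root path are mine, adjust to your own clone:

```python
from pathlib import Path

# Sketch: report which of the expected model locations from the steps above
# are missing. Pass the root of your animatediff-cli-prompt-travel clone.
def missing_model_files(repo_root):
    repo = Path(repo_root)
    expected = [
        repo / "data/models/motion-module/mm_sd_v15_v2.ckpt",
        repo / "data/models/sd",               # put a 1.5 .safetensors model in here
        repo / "data/models/ip_adapter/models",
    ]
    return [str(p) for p in expected if not p.exists()]

# Example (adjust the path to your gdrive layout):
# missing_model_files("/content/drive/MyDrive/AI/AnimateDiff/animatediff-cli-prompt-travel")
```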
Then install everything each time you start up the colab.
#@title installs
!pip install -q torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
!pip install -q tensorrt
!pip install -q xformers imageio
!pip install -q controlnet_aux
!pip install -q transformers
!pip install -q mediapipe onnxruntime
!pip install -q omegaconf
!pip install ffmpeg-python
# have to use 0.18.1 to avoid error: ImportError: cannot import name 'maybe_allow_in_graph' from 'diffusers.utils' (/usr/local/lib/python3.10/dist-packages/diffusers/utils/__init__.py)
!pip install -q diffusers[torch]==0.18.1
# wherever you have it set up:
%set_env PYTHONPATH=/content/drive/MyDrive/AI/AnimateDiff/animatediff-cli-prompt-travel/src
# unclear why it's using the diffusers load and not the internal one
# https://github.com/guoyww/AnimateDiff/issues/57
# have to edit after pip install:
# /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py#790
# to text_model.load_state_dict(text_model_dict, strict=False)
!sed -i 's/text_model.load_state_dict(text_model_dict)/text_model.load_state_dict(text_model_dict, strict=False)/g' /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py
Thanks, but how do you actually run the generation itself? I tried
!python src/animatediff/generate.py -c config/prompts/prompt_travel.json -W 256 -H 384 -L 128 -C 16
and it got as far as the
tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
warning you mentioned, but stopped there.
I'm stuck too. Can you share your Colab notebook?
Did you run
%set_env PYTHONPATH=/content/drive/MyDrive/AI/AnimateDiff/animatediff-cli-prompt-travel/src
? That puts the package on your Python path. Also make sure you are running from inside the repo:
%cd animatediff-cli-prompt-travel
Then:
!python -m animatediff generate -c config/prompts/your-config.json -W 768 -H 432 -L 90 -C 16
I did notice that there is not a lot of logging, so it's hard to tell what it's doing (or where it stops). Maybe the authors could add more of that; I don't have time right now to do a full fork of this repo. To see what was happening with IP-Adapter I had to add a bunch of print statements, because it will just silently fail.
It worked! I'm just not a coder and didn't know to use -m, lol. Thank you!
awesome.
@s9roll7 @aduchon Have a question for you guys. In the default repo, venv is used, and the best my noncoding mind can infer is that packages are downloaded into venv\Lib\site-packages and used inside the venv. In Colab, I set the working env to /src, and every time I use !python -m animatediff generate -c
the necessary model(s?) get downloaded again. Is there a way to place the downloaded files from venv in /src so that when I use the Colab I don't have to download them again?
For common models used across different projects, you can add this at the top of your Colab, so at least the HuggingFace ones are only downloaded once. The other models have to be in specific places for this particular repo, but you can make links in your gdrive and put the links in the models directories as I mention above.
# so we only download once, store them in gdrive
import os
from pathlib import Path

# MYDRIVE_PATH is wherever your Drive is mounted, e.g. Path("/content/drive/MyDrive")
huggingface_path = MYDRIVE_PATH / "AI/huggingface"
os.makedirs(huggingface_path, exist_ok=True)
print(f"huggingface_path: {huggingface_path}")
# so models get stored here
os.environ['TRANSFORMERS_CACHE'] = str(huggingface_path / "models")
os.environ['HF_HOME'] = str(huggingface_path / "home")
os.environ['HF_DATASETS_CACHE'] = str(huggingface_path / "datasets")
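The "links in your gdrive" idea for the non-HuggingFace models can be sketched like this; the helper name and example paths are mine, not something from the repo:

```python
import os
from pathlib import Path

# Sketch: keep one shared copy of a big checkpoint in gdrive and symlink it
# into the repo's models directory instead of duplicating it per project.
def link_model(shared_copy, repo_models_dir):
    shared_copy = Path(shared_copy)
    target = Path(repo_models_dir) / shared_copy.name
    if not target.exists():
        os.symlink(shared_copy, target)
    return target

# e.g.:
# link_model("/content/drive/MyDrive/AI/models/mm_sd_v15_v2.ckpt",
#            "/content/drive/MyDrive/AI/AnimateDiff/animatediff-cli-prompt-travel/data/models/motion-module")
```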
For anyone following along, torch got a big update which was picked up by Colab, so now the installs look like this:
!pip install -q torch==2.0.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
!pip install -q tensorrt
!pip install -q xformers==0.0.22 imageio
!pip install -q controlnet_aux
!pip install -q transformers
!pip install -q mediapipe onnxruntime
!pip install -q omegaconf
!pip install -q ffmpeg-python
# have to use 0.18.1 to avoid error: ImportError: cannot import name 'maybe_allow_in_graph' from 'diffusers.utils' (/usr/local/lib/python3.10/dist-packages/diffusers/utils/__init__.py)
!pip install -q diffusers[torch]==0.18.1
%set_env PYTHONPATH=/content/drive/MyDrive/AI/AnimateDiff/animatediff-cli-prompt-travel/src
# unclear why it's using the diffusers load and not the internal one
# https://github.com/guoyww/AnimateDiff/issues/57
# have to edit after pip install:
# /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py#790
# to text_model.load_state_dict(text_model_dict, strict=False)
!sed -i 's/text_model.load_state_dict(text_model_dict)/text_model.load_state_dict(text_model_dict, strict=False)/g' /usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py