Closed · piyushK52 closed this issue 4 months ago

I am trying to run the provided code, but a couple of files seem to be missing. In invert.py and pipeline.py you are importing UNet3DConditionModel and SparseControlNetModel from motionclone.models, but the motionclone/models directory doesn't exist. Please cross-check the code to fix this issue, or update the README in case there is a missing step.
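For reference, the failing imports are along these lines (reconstructed from the issue description above, not copied verbatim from the repo):

```python
# Both invert.py and pipeline.py import from a package directory that is
# missing from the repository, so this raises an error:
from motionclone.models import UNet3DConditionModel, SparseControlNetModel
# ModuleNotFoundError: No module named 'motionclone.models'
```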
It appears that there was an issue with our .gitignore file, which has now been resolved 😂. Thank you for your feedback.
Thanks for the response. Yeah, it seems to be fixed. Can you give an estimate of how much VRAM it requires? I am running the fox example, but even on an RTX 4090 (24 GB VRAM) it runs out of memory.
Thanks for your feedback. When generating videos at 16×512×512, the peak GPU memory usage of MotionClone is roughly 30 GB or more. You might consider reducing the video resolution to run it on your GPU. (We have updated motionclone/pipelines/additional_components.py to support lower resolutions such as 320×320.)
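As a rough sanity check, activation memory grows at least proportionally to the pixel count (self-attention over spatial tokens grows even faster), so dropping from 512×512 to 320×320 cuts the spatial footprint substantially. A back-of-envelope sketch:

```python
# Back-of-envelope estimate only: real memory use also depends on attention
# layers, the VAE, and model weights, so treat this as a lower-bound guess.
ratio = (320 * 320) / (512 * 512)
print(f"spatial memory ratio: {ratio:.2f}")     # ~0.39
print(f"estimated peak: ~{30 * ratio:.0f}+ GB") # from the ~30 GB figure above
```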
That is very helpful, thanks. By the way, I was able to make the generations work on an A100 (80 GB). Closing this issue.
Sorry for the late reply. We have updated the code. MotionClone now 1) directly performs motion customization without cumbersome video inversion, and 2) significantly reduces GPU memory consumption. In our experiments, 16×512×512 text-to-video takes about 14 GB of memory; MotionClone combined with image-to-video or sketch-to-video takes about 22 GB. Hope this helps.
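If you want to verify these numbers on your own hardware, PyTorch's peak-memory counters are an easy check (a minimal sketch; the generation call is a placeholder, not the repo's exact API):

```python
import torch

# Reset the peak-memory counter, run one generation, then read the high-water mark.
torch.cuda.reset_peak_memory_stats()

# ... run a single generation here, e.g. your 16x512x512 text-to-video call ...

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory: {peak_gb:.1f} GB")
```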