Official Pytorch implementation for MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model.
I ran MOFA-Video-Traj in a PyTorch NGC Container.
I followed the demo's detailed instructions to set the trajectories. After I started the demo, it ran for 10 hours.
While running, the demo used only 7 GB of memory and a single CPU core, despite having access to more resources.
When the Stable Diffusion progress bar reached 100%, no video was produced.
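The symptoms (a single busy CPU core and hours of runtime) suggest the model may be falling back to CPU inference instead of the GPU. Inside the container, `torch.cuda.is_available()` is the direct check; as a lighter stdlib-only sketch, the snippet below inspects one possible culprit, the `CUDA_VISIBLE_DEVICES` environment variable (the helper name `gpus_hidden` is hypothetical, for illustration only):

```python
import os

def gpus_hidden(env=os.environ):
    """Return True if CUDA_VISIBLE_DEVICES is set so that no GPU is visible.

    An empty string or "-1" hides all GPUs from CUDA applications;
    an unset variable leaves all GPUs visible.
    """
    val = env.get("CUDA_VISIBLE_DEVICES")
    return val is not None and val.strip() in ("", "-1")

# Example checks against simulated environments:
print(gpus_hidden({"CUDA_VISIBLE_DEVICES": ""}))   # True: all GPUs hidden
print(gpus_hidden({}))                              # False: variable unset
```

If this returns True inside the container, the NGC container was likely started without `--gpus all` (or with a masking value of `CUDA_VISIBLE_DEVICES`), which would explain the CPU-only behavior.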