alfredplpl opened 2 months ago
I would like to do video-to-video (v2v) with your model. I think we need to add two things to opensora/sample/pipeline_videogen.py.
Create an `encode_videos` function like the following:

```python
def encode_videos(self, videos):
    latents = self.vae.encode(videos)
    ...
    return latents
```
Add code to the `prepare_latents` function so that it can handle noised latents.
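For the second point, a minimal sketch of what I have in mind: partially noise the encoded video latents with the forward diffusion process (SDEdit-style), so sampling can start from them instead of from pure noise. All names here (`add_noise_to_latents`, the `alphas_cumprod` schedule) are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def add_noise_to_latents(latents, noise, alphas_cumprod, timestep):
    """Run the forward diffusion process on clean video latents up to
    `timestep`: x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps.
    Hypothetical helper for prepare_latents; names are illustrative."""
    a = alphas_cumprod[timestep]
    return np.sqrt(a) * latents + np.sqrt(1.0 - a) * noise

# Toy usage: 4-frame, single-channel 8x8 "latents" and a stand-in schedule
rng = np.random.default_rng(0)
latents = rng.standard_normal((4, 1, 8, 8))
noise = rng.standard_normal((4, 1, 8, 8))
alphas_cumprod = np.linspace(0.999, 0.001, 1000)  # placeholder noise schedule
noisy = add_noise_to_latents(latents, noise, alphas_cumprod, timestep=300)
```

A larger `timestep` (denoising strength) keeps less of the source video and gives the model more freedom, which is the usual v2v trade-off.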
Any ideas?
Very creative idea. We can make further attempts, but I believe that before that our model needs to be generalised enough, which means it requires a lot of training data.