yoyo-nb / Thin-Plate-Spline-Motion-Model

[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
MIT License

produce smooth video results #41

Open congweiw opened 1 year ago

congweiw commented 1 year ago

Hello, I'm a little confused. Your method ultimately processes video, converting it frame by frame. Why can such a method produce relatively smooth video results without considering the temporal consistency of the video? Or, put differently, why not model temporal consistency explicitly?

GuruVirus commented 1 year ago

I don't know much, but I've been reading through the code and:

In the demo colab it uses matplotlib.animation.ArtistAnimation with the interval parameter set to 50, which sets the delay between frames to 50 ms. https://matplotlib.org/stable/api/_as_gen/matplotlib.animation.ArtistAnimation.html#matplotlib.animation.ArtistAnimation

Also at the end of the demo they save the output with this command:

imageio.mimsave(output_video_path, [img_as_ubyte(frame) for frame in predictions], fps=fps)

where 'fps' was previously read from the driving video:

reader = imageio.get_reader(driving_video_path)
fps = reader.get_meta_data()['fps']

congweiw commented 1 year ago

Do you mean that saving the list of predicted frames at the original fps smooths the result? I'm still confused, because this method predicts frame by frame and doesn't consider the relationship between the current and previous frames, i.e., temporal consistency.

GuruVirus commented 1 year ago

I gave you some facts from the code.

My opinion is that your assumption is wrong. It appears to process the video frame by frame while preserving the original video's frames per second, so the output matches the input in composition, play speed, and smoothness.

congweiw commented 1 year ago

Maybe you are right. The pose-transfer capability of this method also seems sufficient.

FurkanGozukara commented 1 year ago

> [quotes GuruVirus's comment above about ArtistAnimation's interval=50 and saving with imageio.mimsave at the driving video's fps]

Hello, do you know how we can increase the output resolution?

GuruVirus commented 1 year ago

I'm not familiar with this code base, but you could do something similar to Stable Diffusion (the Auto1111 webui has upscalers included) and use an upscaler to take each frame and increase its size before moving on to the next frame. You could increase the size of the image by up to 4X (at the cost of VRAM and rendering time).
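
That per-frame loop can be sketched as below. This is my own illustration, not code from the repo: the nearest-neighbour resize via np.repeat is just a stand-in for whatever learned upscaler (e.g. Real-ESRGAN, as used by some Stable Diffusion UIs) you would actually call on each frame:

```python
import numpy as np

def upscale_frame(frame, scale=4):
    """Stand-in upscaler: nearest-neighbour resize using np.repeat.
    In practice, replace this body with a call to a learned upscaler."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def upscale_video(frames, scale=4):
    # Upscale one frame at a time to bound peak memory/VRAM usage,
    # instead of loading the whole upscaled video at once.
    return [upscale_frame(f, scale) for f in frames]

# Three dummy 256x256 RGB frames -> 1024x1024 at 4x
frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(3)]
big = upscale_video(frames, scale=4)
assert big[0].shape == (1024, 1024, 3)
```

The frame count (and therefore the fps and play speed) is unchanged; only the spatial resolution grows, which is why this composes cleanly with the fps-preserving save step discussed earlier in the thread.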

FurkanGozukara commented 1 year ago

> [quotes GuruVirus's upscaling suggestion above]

Yes, this is what I have in mind.

hacker009-sudo commented 8 months ago

@FurkanGozukara, were you able to increase the resolution? If yes, please explain how you did it.

FurkanGozukara commented 8 months ago

> [quotes hacker009-sudo's question above]

No, I couldn't do it.