Hi, thanks for the cool work!

I have a question about your test script. In Figure 2 (the test phase), it looks like the video is generated entirely from a single input image. In the code, however, the trajectory is computed from the ground-truth video:
https://github.com/zlai0/VideoAutoencoder/blob/dc1aa14cde7da8c70e84f8cf7d4cc572a5ad9ed4/test_re10k.py#L102-L103
Am I missing something? And is it possible to generate a video from just one fixed input image? (I know that training necessarily requires a video sequence; I'm asking specifically about test time.)
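To make the question concrete, here is a minimal sketch of the test-time setup I was imagining, where the camera trajectory is hand-specified rather than estimated from ground-truth frames. This is only my assumption of how it might work: `encode_3d` and `render_frame` are hypothetical placeholder names, not functions from this repo; only the trajectory construction itself is meant literally.

```python
import math
import torch

def synthetic_trajectory(n_frames, yaw_step=0.02, z_step=0.01):
    """Hand-specified camera path (a small per-frame yaw plus forward
    motion), meant to replace poses estimated from ground-truth frames."""
    poses = []
    for t in range(n_frames):
        yaw, tz = yaw_step * t, z_step * t
        pose = torch.eye(4)                  # 4x4 camera-to-world matrix
        pose[0, 0] = pose[2, 2] = math.cos(yaw)
        pose[0, 2] = math.sin(yaw)
        pose[2, 0] = -math.sin(yaw)
        pose[2, 3] = tz                      # translate along z
        poses.append(pose)
    return torch.stack(poses)                # (n_frames, 4, 4)

# Hypothetical usage -- encode_3d / render_frame are NOT this repo's API:
# scene = encode_3d(first_frame)                        # single input image
# video = [render_frame(scene, p) for p in synthetic_trajectory(30)]
```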
Thanks.