RERV / VDT

[ICLR 2024] The official implementation of the paper "VDT: General-purpose Video Diffusion Transformers via Mask Modeling", by Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, Mingyu Ding.

Physion inference with fewer than 8 condition frames #9

Open aweitz opened 7 months ago

aweitz commented 7 months ago

Congratulations on the impressive paper. When I ran inference with your pre-trained Physion model, the results degraded significantly as the number of condition frames was reduced. For example, using 4 condition frames (rather than the default of 8) produces only noise - see the attached image below.

Does this match your expectation? It seems at odds with the paper's discussion, which states "our VDT can still take any length of conditional frame as input and output consistent predicted features".

Thank you!

Edit: I see in Figure 8 that you tried using more than 8 conditional frames, but not fewer. Do you have a sense of how well forward prediction can perform with only 1 conditioning frame in VDT? Would the model need to be trained with only 1 conditioning frame?

[Attached image: generated frames collapse to noise when only 4 condition frames are used.]

RERV commented 7 months ago

Thank you for your interest in our VDT. The performance drop with fewer condition frames stems from the fact that the released model was trained with a fixed number of conditioning frames (8). We have found that it is feasible to extend the model's capabilities to conditions of more than 8 frames, as demonstrated in Appendix Figure 8. The term "any length" may have been somewhat ambiguous; it specifically refers to any quantity exceeding 8 frames, and we have revised the paper to clarify this point. As for the second question ("Would the model need to be trained with only 1 conditioning frame?"): I think you could try applying our unified mask modeling, so that VDT becomes capable of zero-shot extrapolation across any spatial-temporal dimension.
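To make the mechanism concrete, here is a minimal sketch (not the released training code; the function names and the mask distribution are illustrative assumptions) of how mask modeling conditions a video diffusion model: latents of condition frames are kept clean, while the remaining frames carry noise for the model to denoise.

```python
import torch

def sample_condition_mask(num_frames: int, batch_size: int) -> torch.Tensor:
    # Randomly pick, per sample, how many leading frames serve as the
    # condition (0 means fully unconditional generation).
    n_cond = torch.randint(0, num_frames, (batch_size,))   # (B,)
    idx = torch.arange(num_frames).unsqueeze(0)            # (1, F)
    return (idx < n_cond.unsqueeze(1)).float()             # (B, F), 1 = condition

def apply_mask_modeling(x0: torch.Tensor, noisy_x: torch.Tensor,
                        mask: torch.Tensor) -> torch.Tensor:
    # x0, noisy_x: (B, F, C, H, W) latent frames; mask: (B, F), 1 = condition.
    # Condition frames stay clean; the model learns to denoise the rest.
    m = mask.view(mask.shape[0], mask.shape[1], 1, 1, 1)
    return m * x0 + (1.0 - m) * noisy_x
```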

aweitz commented 6 months ago

Thank you for the detailed response!

> I think you could try applying our unified mask modeling, so that VDT becomes capable of zero-shot extrapolation across any spatial-temporal dimension.

To clarify, the unified mask approach must be applied during training to enable the various tasks (including zero-shot extrapolation) at inference, correct? Do you plan to release details on how you modulated the spatial-temporal mask (e.g., the probability of frame dropout)?
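For concreteness, a training-time mask sampler along the lines asked about above might look like the following sketch. The task mix, the dropout probability, and all names are hypothetical; the paper does not specify the actual schedule here.

```python
import random
import torch

def sample_training_mask(num_frames: int = 16, p_uncond: float = 0.1) -> torch.Tensor:
    """Return a (F,) mask where 1 marks a condition frame and 0 a frame
    to generate. Task mix and probabilities are illustrative assumptions."""
    mask = torch.zeros(num_frames)
    if random.random() < p_uncond:
        return mask                                    # unconditional generation
    task = random.choice(["forward", "backward", "interpolation", "random"])
    k = random.randint(1, num_frames - 1)              # number of condition frames
    if task == "forward":
        mask[:k] = 1.0                                 # condition on a prefix
    elif task == "backward":
        mask[-k:] = 1.0                                # condition on a suffix
    elif task == "interpolation":
        mask[0] = mask[-1] = 1.0                       # condition on the endpoints
    else:
        drop = torch.rand(num_frames) < 0.5            # random frame dropout
        mask[~drop] = 1.0
    return mask
```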

aweitz commented 6 months ago

Additionally, can you please clarify whether the unified mask modeling is applied in the image space or the latent (VAE) space? Your notation suggests it is applied in the image domain ($\mathcal{M} \in \mathbb{R}^{F \times H \times W \times C}$). Thank you!
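If the mask is in fact applied in the latent space, a frame-level image-space mask would first need resizing to the latent grid produced by the VAE encoder. A minimal sketch under that assumption (not the authors' released code):

```python
import torch
import torch.nn.functional as F

def mask_to_latent(mask: torch.Tensor, latent_hw: tuple) -> torch.Tensor:
    # Resize a per-frame image-space mask (F, H, W) to the VAE latent grid
    # (F, h, w) so it can gate latent tokens. Assumes a binary mask that is
    # constant across channels.
    m = mask.unsqueeze(1).float()                      # (F, 1, H, W)
    m = F.interpolate(m, size=latent_hw, mode="nearest")
    return m.squeeze(1)                                # (F, h, w)

# For a typical VAE with 8x spatial downsampling, latent_hw would be
# (H // 8, W // 8).
```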

RERV commented 5 months ago

Hi everyone, my apologies for the late reply; I was quite busy earlier and couldn't get to it. I have now updated the mask modeling, and you can find the necessary code in the repository. Have fun!