HVision-NKU / StoryDiffusion


About some details of paper #70

Closed · YahooKID closed this issue 1 month ago

YahooKID commented 1 month ago

Thanks for open-sourcing such a great model. I have a couple of questions about some details of the paper:

  1. In the training stage of the transition video generation model, did you freeze the Motion Modeling Module taken from AnimateDiff, or fine-tune (SFT) it together with the Semantic Space Motion Predictor (the transformer block part)?
  2. Regarding the WebVid-10M training dataset: as far as I know, almost all videos in it carry similar watermarks at similar positions, and to my limited knowledge watermarks with such consistent features can affect the model's capability. If you applied any preprocessing, could you share it?

Cheers

brentjohnston commented 1 month ago

Crickets for some reason; I'd like to know as well.

zhoudaquan commented 1 month ago

Hi,

Thank you for your interest in the work.

  1. We train the motion predictor together with the motion module taken from AnimateDiff; both modules are trainable.
  2. Please refer to this script for watermark removal on the WebVid dataset: https://github.com/RoundofThree/python-scripts/blob/1f9455ce9f5832883e1002e73934afa4099a097e/watermark_removal/watermark_remover.py#L188

Regards, Zhou Daquan
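
For anyone reproducing this setup, here is a minimal PyTorch sketch of what "both modules are trainable" means in practice; the layer choices and hyperparameters are illustrative assumptions, not the released StoryDiffusion training code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins only: the real AnimateDiff motion module and the
# semantic-space motion predictor are more involved than these placeholders.
motion_module = nn.TransformerEncoder(          # temporal layers (AnimateDiff-style)
    nn.TransformerEncoderLayer(d_model=320, nhead=8, batch_first=True), num_layers=2
)
motion_predictor = nn.TransformerEncoder(       # semantic-space motion predictor
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True), num_layers=4
)

# Neither module is frozen: both parameter groups go into the same optimizer,
# so gradients from the video loss update the motion module and the predictor.
optimizer = torch.optim.AdamW(
    list(motion_module.parameters()) + list(motion_predictor.parameters()),
    lr=1e-4,
)
```

For the watermark question, the linked script does the actual removal; the sketch below only illustrates the general idea of inpainting a fixed-position watermark with OpenCV (the mask coordinates are made up and would need to be measured on real WebVid frames).

```python
import cv2
import numpy as np

def remove_fixed_watermark(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Inpaint a watermark region that sits at the same position in every frame.

    frame: HxWx3 BGR image; mask: HxW uint8 array, non-zero over the watermark.
    Because WebVid watermarks share a location, one mask can serve the dataset.
    """
    return cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

frame = cv2.imread("frame_000.png")
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[280:330, 180:460] = 255   # illustrative rectangle over the watermark area
cv2.imwrite("frame_000_clean.png", remove_fixed_watermark(frame, mask))
```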

YahooKID commented 1 month ago

Thanks.

armored-guitar commented 1 month ago

@zhoudaquan Hi, thank you for your great work! I am trying to reproduce your code. Could you please help me clarify some details about the work?

  1. Do you use consistent self-attention for video training?
  2. On page 6 there is a figure of the architecture, which says you compress the images (2xHxWx3) into a semantic space of shape 2xNxC. What is N: 257 (the CLIP output length) or 1 (a linear projection)?
  3. What is the sequence length for the motion transformer? If it is FxN, what is N?

Looking forward to your answer.

Z-YuPeng commented 1 month ago

We encode a single image as N token vectors that represent different semantic information, and then perform prediction over these tokens. Thus, each intermediate frame corresponds to N tokens in the semantic space.
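
To make the shape bookkeeping concrete, here is a rough PyTorch sketch of one way such a predictor could be wired up; the token count N, channel width C, frame count F, and the module internals are all my own assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SemanticMotionPredictorSketch(nn.Module):
    """Toy predictor: given the N semantic tokens of the start and end frames
    (2 x N x C), predict N tokens for each of F intermediate frames, so the
    transformer operates on a sequence of length 2*N + F*N."""

    def __init__(self, n_tokens=32, dim=768, n_frames=16, depth=4):
        super().__init__()
        self.n_tokens, self.n_frames = n_tokens, n_frames
        # Learnable queries standing in for the F*N intermediate-frame tokens.
        self.frame_queries = nn.Parameter(torch.randn(n_frames * n_tokens, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, end_frame_tokens):
        # end_frame_tokens: (B, 2*N, C) -- semantic tokens of the two end frames.
        b = end_frame_tokens.shape[0]
        queries = self.frame_queries.unsqueeze(0).expand(b, -1, -1)   # (B, F*N, C)
        x = torch.cat([end_frame_tokens, queries], dim=1)             # (B, 2*N + F*N, C)
        x = self.transformer(x)
        # Keep only the predicted intermediate-frame tokens: (B, F, N, C).
        out = x[:, end_frame_tokens.shape[1]:, :]
        return out.reshape(b, self.n_frames, self.n_tokens, -1)

# Usage: a batch of 2 samples, each with 2*N = 64 end-frame tokens of width 768.
tokens = torch.randn(2, 64, 768)
pred = SemanticMotionPredictorSketch()(tokens)
print(pred.shape)   # torch.Size([2, 16, 32, 768])
```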