qiuyu96 / CoDeF

[CVPR 2024 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
https://qiuyu96.github.io/CoDeF/

My canonical image generation is so bad. May I ask where I made a mistake #31

Closed · AbyssBadger0 closed this issue 1 year ago

AbyssBadger0 commented 1 year ago

Hello author, your project presentation was impressive. Redrawing a video with your example training set worked great, but when I trained on and generated my own video the results were poor. Could you please help me figure out the reason? My training parameter settings are as follows: [screenshot of training parameters]. My input frames look like this: [frame 0001]. The video shows a character jumping and moving, and I used the last checkpoint after training. But the canonical image I got looks like this: [canonical_0]. This also led to poor results in the subsequent rendered video.

ken-ouyang commented 1 year ago

How about the reconstruction quality? Can you show the link of the original videos?

AbyssBadger0 commented 1 year ago

This is the original video:
https://github.com/qiuyu96/CoDeF/assets/125934639/e3349b8b-2551-4273-8def-2dc8479ba589

This is the reconstruction:
https://github.com/qiuyu96/CoDeF/assets/125934639/a4f94bbf-2ddc-4b5a-9197-2a33a9844fc0

ken-ouyang commented 1 year ago

It appears that the reconstruction of the foreground object is not as expected (which is strange). I would like to provide a few suggestions that could potentially address this issue:

  1. Consider using grouped deformation fields, using an approach such as Sam-Track to first segment the object. This could lead to better isolation and, therefore, improved reconstruction of the foreground object.
  2. Another option is to increase the annealing step. This might allow for a more accurate and detailed reconstruction by gradually refining the model's approximation (see the sketch after this list).
  3. For validation purposes, starting with a shorter video clip might be beneficial.

It's also worth considering that the motion in the video clip may be too rapid for the temporal grid to accurately capture it.
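On suggestion 2: the annealing step usually controls a coarse-to-fine schedule over positional-encoding frequency bands, in the style of Nerfies. Below is a minimal sketch of such a schedule; the function name and arguments are illustrative, not CoDeF's actual API.

```python
import torch

def annealed_pe_weights(step: int, anneal_steps: int, n_freqs: int) -> torch.Tensor:
    """Coarse-to-fine weights for positional-encoding frequency bands,
    as in Nerfies. A larger `anneal_steps` keeps high-frequency bands
    suppressed longer, so the deformation field fits coarse motion
    before fine detail."""
    alpha = n_freqs * min(step / anneal_steps, 1.0)
    bands = torch.arange(n_freqs, dtype=torch.float32)
    # Each band j ramps in smoothly as alpha passes j.
    return 0.5 * (1.0 - torch.cos(torch.pi * torch.clamp(alpha - bands, 0.0, 1.0)))
```

Each returned weight would multiply the corresponding sin/cos frequency band of the encoded input, so raising `anneal_steps` delays when fine detail enters the optimization.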

AbyssBadger0 commented 1 year ago

I tried another video with slightly smaller character movements, and the result was better than before, but there are always these floating textures that I can't identify. ControlNet uses lineart and openpose: [rendered frame]. I compressed the video in order to upload it.

https://github.com/qiuyu96/CoDeF/assets/125934639/16131473-7c8a-438b-8c84-9869ff28042d

ken-ouyang commented 1 year ago

As noted in the Discussion section of our paper, the current method may not perform optimally for long sequences that involve significant deformation. This is because such sequences might require multiple canonical images, a feature that has not been implemented in the current version of our method. The default parameters are also designed for around 100 video frames.

For shorter video clips, however, our method should produce satisfactory results with proper parameters (e.g., annealed step, MLP size, and so on). This is demonstrated as follows:

https://github.com/qiuyu96/CoDeF/assets/11994361/ebe8f191-a296-4443-89c4-af59fba589d0

We are actively working on enhancing the method to handle longer sequences and larger deformations, especially for humans. Please stay tuned.
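Since the defaults target roughly 100 frames, one pragmatic workaround in the meantime is to split a long frame sequence into overlapping clips of about 100 frames and train each clip separately. A minimal sketch, with illustrative paths and no CoDeF-specific API:

```python
from pathlib import Path

def split_into_clips(frames_dir: str, clip_len: int = 100, overlap: int = 10) -> None:
    """Symlink an extracted frame sequence into overlapping ~100-frame
    clips so each clip stays within the range the default parameters
    were tuned for. Directory names here are illustrative."""
    frames = sorted(Path(frames_dir).glob("*.png"))
    step = clip_len - overlap
    for i, start in enumerate(range(0, len(frames), step)):
        clip_dir = Path(frames_dir).parent / f"clip_{i:02d}"
        clip_dir.mkdir(exist_ok=True)
        for f in frames[start:start + clip_len]:
            link = clip_dir / f.name
            if not link.exists():
                link.symlink_to(f.resolve())
```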

LyazS commented 1 year ago

Is there any way to add multiple canonical images?

xpeng commented 1 year ago

If I use flow, must I add flow_dir to the config file? I found the reconstruction results are not much different whether I run python train.py with or without '--flow_dir'.

ken-ouyang commented 1 year ago

@LyazS Yes. There are different designs for using multiple canonical spaces, such as HyperNeRF.
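For context, the HyperNeRF idea can be sketched as a deformation head that, besides a spatial offset, predicts low-dimensional ambient ("hyper") coordinates, so the canonical network is queried in an extended space instead of a single fixed canonical image. The toy module below is illustrative only, not CoDeF code:

```python
import torch
import torch.nn as nn

class HyperDeformationHead(nn.Module):
    """Toy HyperNeRF-style head: besides a spatial offset, the network
    predicts ambient ("hyper") coordinates w, so the canonical MLP is
    queried at (x + dx, w) and can represent content changes that a
    single canonical image cannot. All sizes are illustrative."""
    def __init__(self, in_dim: int, hidden: int = 128, ambient_dim: int = 2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.offset = nn.Linear(hidden, 2)           # dx, dy for a 2D video field
        self.ambient = nn.Linear(hidden, ambient_dim)

    def forward(self, x_embed: torch.Tensor):
        h = self.trunk(x_embed)
        return self.offset(h), self.ambient(h)
```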

ken-ouyang commented 1 year ago

@xpeng The flow is optional for training. The image quality with or without flow is quite similar. But the video reconstructed with flow contains less flickering.
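For readers preparing flow inputs, the sketch below computes forward optical flow between consecutive frames with torchvision's RAFT. The on-disk layout that '--flow_dir' expects is not shown and may differ from the repo's own preprocessing, so treat this as an assumption-laden starting point:

```python
import numpy as np
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
model = raft_large(weights=Raft_Large_Weights.DEFAULT).to(device).eval()

@torch.no_grad()
def flow_between(img1: torch.Tensor, img2: torch.Tensor) -> np.ndarray:
    """img1, img2: (1, 3, H, W) float tensors in [-1, 1], H and W divisible by 8.
    Returns the forward flow from img1 to img2 as a (2, H, W) array."""
    flows = model(img1.to(device), img2.to(device))  # list of iterative refinements
    return flows[-1][0].cpu().numpy()                # keep the final, most refined estimate
```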

xpeng commented 1 year ago

> @xpeng The flow is optional for training. The image quality with or without flow is quite similar. But the video reconstructed with flow contains less flickering.

Thanks for your attention, I will experiment more.

AbyssBadger0 commented 1 year ago

Thank you for your reply and answers. I will continue to follow the project and keep trying!

zhanghongyong123456 commented 1 year ago

> For shorter video clips, however, our method should produce satisfactory results with proper parameters (e.g., annealed step, MLP size, and so on).

About the MLP size: where is it set? I could not find it. Is it here, and which parameter should be set? [screenshot of hyperparameters]

ken-ouyang commented 1 year ago

@zhanghongyong123456 The MLP size for the hash encoding is in config.json. The hyperparameter here is for positional encoding (in case hash encoding is not adopted).
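For orientation, assuming a tiny-cuda-nn-style config.json (the convention hash-encoding backends commonly use), the MLP size lives under a `network` block. The keys below are the standard tiny-cuda-nn names, but the repo's actual file may differ:

```python
# Python mirror of the MLP-size fields in a tiny-cuda-nn-style config.json;
# the exact keys and defaults in the repo may differ -- illustrative only.
network_cfg = {
    "otype": "FullyFusedMLP",    # fused MLP kernel
    "activation": "ReLU",
    "output_activation": "None",
    "n_neurons": 64,             # width: raise for more capacity (and memory)
    "n_hidden_layers": 2,        # depth: raise for more capacity (and runtime)
}
```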

zhanghongyong123456 commented 1 year ago

> The MLP size for the hash encoding is in config.json. The hyperparameter here is for positional encoding (in case hash encoding is not adopted).

How should the MLP parameters be modified to improve the reconstruction quality? [screenshot of config]