How about the reconstruction quality? Can you share a link to the original video?
This is the original video: https://github.com/qiuyu96/CoDeF/assets/125934639/e3349b8b-2551-4273-8def-2dc8479ba589
This is the reconstruction: https://github.com/qiuyu96/CoDeF/assets/125934639/a4f94bbf-2ddc-4b5a-9197-2a33a9844fc0
It appears that the reconstruction of the foreground object is not as expected (which is strange). I would like to provide a few suggestions that could potentially address this issue:
It's also worth considering that the motion in the video clip may be too rapid for the temporal grid to capture accurately.
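One quick way to sanity-check this is to measure how large the per-frame motion actually is. Below is a minimal diagnostic sketch, not part of CoDeF — it assumes OpenCV is installed and uses Farneback optical flow as a rough motion proxy; the input path is a placeholder:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")  # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mags = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # mean flow magnitude, in pixels per frame
    mags.append(np.linalg.norm(flow, axis=2).mean())
    prev_gray = gray
cap.release()
print(f"mean motion: {np.mean(mags):.2f} px/frame, max: {np.max(mags):.2f}")
```

If the mean motion is more than a few pixels per frame, the clip is likely on the fast side for a single canonical image (the threshold is a heuristic, not a documented limit).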
I tried a video with slightly smaller character movements, and the result was better than before, but there are always these floating textures that I can't identify. ControlNet uses lineart and openpose. I compressed the video in order to upload it.
https://github.com/qiuyu96/CoDeF/assets/125934639/16131473-7c8a-438b-8c84-9869ff28042d
As noted in the Discussion section of our paper, the current method may not perform optimally for long sequences that involve significant deformation. This is because such sequences might require multiple canonical images, a feature that has not been implemented in the current version of our method. The default parameters are also designed for around 100 video frames.
For shorter video clips, however, our method should produce satisfactory results with proper parameters (e.g., annealed step, MLP size, and so on). This is demonstrated as follows:
https://github.com/qiuyu96/CoDeF/assets/11994361/ebe8f191-a296-4443-89c4-af59fba589d0
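If your clip is longer than that, one simple option is to keep only the first ~100 frames before training. A minimal sketch of this preprocessing, assuming OpenCV; the paths are placeholders and the output format (a folder of numbered PNGs) should be adjusted to whatever your data pipeline expects:

```python
import cv2
import os

src, dst, max_frames = "input.mp4", "frames_100", 100  # placeholders
os.makedirs(dst, exist_ok=True)
cap = cv2.VideoCapture(src)
for i in range(max_frames):
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(dst, f"{i:05d}.png"), frame)
cap.release()
```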
We are actively working towards enhancing the method to handle longer sequences and larger deformations, especially for humans. Please stay tuned.
Is there any way to add multiple canonical images?
If I use flow, must I add `flow_dir` in the config file?
I found the reconstruction results are not much different whether I run `python train.py` with or without `--flow_dir`.
@LyazS Yes. There are different designs for using multiple canonical spaces, such as HyperNeRF.
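Until such a design lands, one crude workaround (my own sketch, not the HyperNeRF-style approach mentioned above) is to split a long sequence into ~100-frame chunks and fit one canonical image per chunk by training separately on each. The directory layout and the `--root_dir` flag below are assumptions — check the repo's actual training arguments:

```python
import os
import shutil
import subprocess

frames = sorted(os.listdir("all_frames"))  # placeholder input directory
for c, start in enumerate(range(0, len(frames), 100)):
    chunk_dir = f"chunks/{c:02d}"
    os.makedirs(chunk_dir, exist_ok=True)
    for name in frames[start:start + 100]:
        shutil.copy(os.path.join("all_frames", name), chunk_dir)
    # hypothetical invocation — substitute the real config/flags
    subprocess.run(["python", "train.py", "--root_dir", chunk_dir], check=True)
```

The obvious cost is that the canonical image (and any edit applied to it) is no longer shared across chunks, so edits must be propagated or redone per chunk.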
@xpeng The flow is optional for training. The image quality with or without flow is quite similar. But the video reconstructed with flow contains less flickering.
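To see the flickering difference quantitatively, here is a quick heuristic check (my own, not from the paper): the average frame-to-frame pixel difference of a reconstructed video — lower usually means less temporal flicker. Assumes OpenCV; file names are placeholders:

```python
import cv2
import numpy as np

def mean_frame_diff(path):
    """Average absolute pixel change between consecutive frames."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    diffs = []
    while True:
        ok, cur = cap.read()
        if not ok:
            break
        diffs.append(np.abs(cur.astype(np.float32) - prev.astype(np.float32)).mean())
        prev = cur
    cap.release()
    return float(np.mean(diffs))

print("with flow:   ", mean_frame_diff("recon_with_flow.mp4"))     # placeholder
print("without flow:", mean_frame_diff("recon_without_flow.mp4"))  # placeholder
```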
Thanks for the attention, I will experiment more.
Thank you for your reply and answer. I will continue to follow and try!
About MLP size: where is it set? I could not find it. Is it here? Which parameter should be set?
@zhanghongyong123456 The MLP size for hash encoding is in config.json. The hyperparameter here is for positional encoding (in case hash encoding is not adopted).
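For reference, a sketch of bumping the MLP width/depth programmatically. The path and the field names (`"network"`, `"n_neurons"`, `"n_hidden_layers"`) follow tiny-cuda-nn conventions and are assumptions — check the keys in the repo's actual config.json:

```python
import json

path = "configs/hash/config.json"  # placeholder path
with open(path) as f:
    cfg = json.load(f)
cfg["network"]["n_neurons"] = 128      # wider hidden layers (assumed key)
cfg["network"]["n_hidden_layers"] = 3  # deeper MLP (assumed key)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```

A larger MLP gives the canonical field more capacity but trains slower and can overfit short clips, so it is worth changing one value at a time.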
How can I modify the MLP parameters to improve the reconstruction quality?
Hello author, your project presentation was impressive. Redrawing videos with your example training set worked great, but when I trained on and generated my own video, the results were poor. Could you please help me see what the reason is? My training parameter settings are as follows. My picture looks like this. The content of the video is a character jumping and moving. The model used after training is the last one, but the canonical image I got looks like this, which also led to poor video results in the subsequent rendering.