zju3dv / ENeRF

SIGGRAPH Asia 2022: Code for "Efficient Neural Radiance Fields for Interactive Free-viewpoint Video"
https://zju3dv.github.io/enerf

About video on the website #10

Closed mct1224 closed 1 year ago

mct1224 commented 1 year ago

Hi Haotong and Sida,

awesome work! I believe many are as impressed as I am. My question is about the experimental setting for the video on your website, since it isn't mentioned in the main paper. I wonder: (1) How many cameras are you using? (2) What are the training and testing splits? E.g., is testing done on completely new videos? Is the training data similar to the test videos? (3) Are these results generated with finetuning?

Thank you very much for your awesome work!

haotongl commented 1 year ago

Thanks for your attention.

  1. The outdoor sequence was captured by 18 cameras (covering an area of about 120 degrees).
  2. We use all 18 cameras to train that model, so the training data and the rendered videos are similar. However, the rendered videos are all from novel views.
  3. Yes, these are generated with finetuning.