Open Jwoo5 opened 2 years ago
Great paper explanation; only a couple of points might be improved:
Just a note that might be helpful: you can resize images with
<img src="https://github.com/favicon.ico" width="48">
This review is a good read. Dynamic NeRF adds a time dimension to NeRF, so it can produce a video from any other viewpoint.
I have a question about it: since it adds one more dimension, I would think it requires more training data. How much data is required compared to the original NeRF?
I also wish the figures were bigger.
Thank you for introducing a good paper.
Hi, thank you for your question! The paper used 50-150 training images per object, and the original NeRF paper seems to have used similarly sized datasets. I think the use of two separate neural networks in D-NeRF means the extra dimension doesn't necessarily raise the need for more images. It's also possible that the original NeRF used unnecessarily many pictures in training. Note, though, that the movements achieved in the paper were very simple; using the technique to generate video of, say, parkour would require a much, much bigger dataset, and I think it would be very difficult.
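To make the two-network point concrete, here is a minimal, untrained sketch of the D-NeRF idea in NumPy (my own simplification; the real model uses trained MLPs with positional encoding, and all layer sizes here are made up): a deformation network maps a point and a time to a displacement into canonical space, and an ordinary static-NeRF-style canonical network then predicts density and color there.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Toy MLP with random, untrained weights (illustration only).
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

# Deformation network: (x, t) -> delta_x, mapping a point observed at
# time t back to the canonical (static) configuration.
deform_net = mlp([4, 64, 3])
# Canonical network: (canonical x, view direction) -> (density, rgb),
# i.e. an ordinary static NeRF.
canon_net = mlp([6, 64, 4])

def d_nerf(x, d, t):
    delta = forward(deform_net, np.concatenate([x, t], axis=-1))
    x_canonical = x + delta  # warp into canonical space
    out = forward(canon_net, np.concatenate([x_canonical, d], axis=-1))
    sigma, rgb = out[..., 0], out[..., 1:]
    return sigma, rgb

# Query a batch of 8 sample points along some rays.
x = rng.standard_normal((8, 3))  # 3D positions
d = rng.standard_normal((8, 3))  # viewing directions
t = rng.random((8, 1))           # times in [0, 1]
sigma, rgb = d_nerf(x, d, t)
print(sigma.shape, rgb.shape)    # (8,) (8, 3)
```

The point of the split is that only the small deformation network ever sees the time input, while the canonical network is trained on every frame at once, which is plausibly why the extra dimension doesn't demand proportionally more images.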
Well explained, but an illustration of the model's processing would improve the review. It would also be better if the qualitative result figures were bigger.