google / hypernerf

Code for "HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields".
https://hypernerf.github.io
Apache License 2.0

There is no rendered video when run locally. #30

Open. mah-asd opened this issue 2 years ago.

mah-asd commented 2 years ago

Hello everyone, can someone help me run this code locally on my computer? I ran train.py and eval.py on my own dataset, but my render folder is empty. In fact, there is no code for rendering a video in eval.py at all!

wangrun20 commented 2 years ago

Yes, eval.py does not render video at all.

If you want to render a video, you should use the Colab notebook at https://colab.research.google.com/github/google/hypernerf/blob/main/notebooks/HyperNeRF_Render_Video.ipynb

But I ran into some difficulties when I tried to run it on Colab online, so I downloaded it and rewrote it as a Python script myself. I was then able to render my video successfully on my local machine; a sketch of the video-writing change I had to make is below.

If you have any trouble running HyperNeRF, feel free to reach out.
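The main part I had to change when moving off Colab was the final step: the notebook displays the result inline, which does not work in a plain script, so the rendered frames have to be written to a file instead. Here is a minimal sketch of that step, assuming the frames have already been rendered as numpy arrays and that the mediapy package (which the notebook imports) is installed; the function and paths below are just illustrative:

```python
# Minimal sketch: write already-rendered frames to an mp4 file locally.
# Assumes `frames` is a sequence of HxWx3 uint8 numpy arrays, e.g. the
# per-frame outputs collected from the notebook's render loop.
import numpy as np
import mediapy as media


def save_video(frames, out_path='render.mp4', fps=30):
  # mediapy encodes the image sequence to mp4 via ffmpeg.
  frames = [np.asarray(f) for f in frames]
  media.write_video(out_path, frames, fps=fps)


if __name__ == '__main__':
  # Dummy black frames just to show usage; replace with real rendered frames.
  dummy = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
  save_video(dummy, 'render.mp4', fps=30)
```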

Zvyozdo4ka commented 8 months ago

@wangrun20 I managed to render a video somehow, but the output looks incapable of representing my face. Could you recommend how to fix it? Is there a specific guideline for how to capture the input video?

https://github.com/google/hypernerf/assets/74532816/1cf40c4e-ff6e-4e7b-bfcd-1860d7f63eb0

wangrun20 commented 8 months ago

In my experiment, although I was unable to match the reconstruction quality claimed by the paper's authors, I did gain some insights. It is best for the input video to have a clean, uncluttered background, because the HyperNeRF pipeline first preprocesses the video with COLMAP, matching feature points across video frames. If the background is too cluttered, feature matching can produce incorrect camera poses, and the reconstruction will then also be wrong, regardless of training settings.
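To diagnose whether the camera poses are the problem, one option is a quick sanity check on the COLMAP sparse model produced by the capture-processing step. This is only a sketch, assuming pycolmap is installed and that you know where the sparse reconstruction was written; the path and frame count below are hypothetical, and attribute names may differ slightly between pycolmap versions:

```python
# Minimal sketch: sanity-check the COLMAP sparse reconstruction before training.
# Assumes `pip install pycolmap` and that sparse_dir points at a COLMAP model
# directory (e.g. my_capture/sparse/0). Both values below are placeholders.
import pycolmap


def check_colmap_model(sparse_dir, num_input_frames):
  rec = pycolmap.Reconstruction(sparse_dir)
  num_registered = len(rec.images)    # frames COLMAP managed to pose
  num_points = len(rec.points3D)      # triangulated feature points
  print(f'registered {num_registered}/{num_input_frames} frames, '
        f'{num_points} 3D points')
  # If only a small fraction of frames register, or the point count is tiny,
  # the poses are unreliable and HyperNeRF's output will look wrong.
  return num_registered / max(num_input_frames, 1)


if __name__ == '__main__':
  check_colmap_model('my_capture/sparse/0', num_input_frames=300)
```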