facebookresearch / hyperreel

Code release for HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling

How to run your own data set? #1

Open 1zgh opened 1 year ago

1zgh commented 1 year ago

Very nice work! How do I run this on my own dataset? With COLMAP? Thanks!!

benattal commented 1 year ago

Yep, that's right! You can follow the instructions in the LLFF README to extract poses from your own dataset. You may also need to create your own dataset config; start by copying conf/experiment/dataset/llff_large.yaml.

Additionally, if your scene is not forward facing, you may need to modify an existing model configuration so that it works for non-forward-facing scenes. Let me know if this is the case for you, and I can try to put together more detailed instructions for creating/modifying model configuration files. I'm also happy to walk you through the process here.

1zgh commented 1 year ago

Thank you very much!

1. Yes, my scene is not forward facing; the scenes in my dataset are captured from all directions. Which configuration files should I modify?
2. I have also noticed that the LLFF dataset contains images that are subsampled and processed with COLMAP, but vanilla COLMAP does not produce files such as poses_bounds.npy. Do you have your own COLMAP script that everyone can use? Thank you.

ZhenyanSun commented 1 year ago

> Yes, my scene is not forward facing; the scenes in my dataset are captured from all directions. Which configuration files should I modify? [...]

Thanks for this question! I'd also like to run the method on my own dataset. Could you share more about what I should do?

benattal commented 1 year ago

Sorry, I provided a link to the wrong set of instructions above. Please take a look at this link from the NeRF codebase, which provides instructions for a script that invokes COLMAP and extracts the poses_bounds.npy file that you mentioned above. We do not have a COLMAP script specific to this repository for extracting poses.
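For reference, the poses_bounds.npy file that the LLFF imgs2poses.py script writes out is just an N x 17 NumPy array, one row per image. A minimal sketch of how to inspect it (the my_scene/ path is a placeholder):

```python
import numpy as np

# Load the output of LLFF's imgs2poses.py; "my_scene" is a placeholder path.
poses_bounds = np.load("my_scene/poses_bounds.npy")  # shape: (N, 17)

# The first 15 entries of each row form a 3x5 matrix: a 3x4 camera-to-world
# pose plus an extra column holding [image height, image width, focal length].
poses_hwf = poses_bounds[:, :15].reshape(-1, 3, 5)
poses = poses_hwf[:, :, :4]    # (N, 3, 4) camera-to-world matrices
hwf = poses_hwf[:, :, 4]       # (N, 3) height, width, focal per image

# The last two entries of each row are the near/far depth bounds of the scene.
bounds = poses_bounds[:, 15:]  # (N, 2)

print(poses.shape, hwf[0], bounds.min(), bounds.max())
```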

@1zgh, with regard to your first question: depending on your camera configuration, you should probably modify either the donerf_voxel, donerf_sphere, or donerf_cylinder model configuration files. The last two will work better if your scene is "outward facing" (all cameras live within some small-ish view box). You can likely use the LLFF dataset class (in datasets/llff.py), but it may require some minor modifications to work with non-forward-facing scenes. A rough way to sanity-check which category your capture falls into is sketched below.
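If you're unsure, a rough heuristic (purely illustrative, not part of this repo) is to look at the camera-to-world poses: forward-facing captures have all cameras looking in roughly the same direction, while inward- and outward-facing captures look toward or away from the centroid of the camera positions. This sketch assumes OpenGL-style poses where the camera looks down its local -z axis:

```python
import numpy as np

def describe_capture(poses: np.ndarray) -> None:
    """Classify a capture from (N, 3, 4) camera-to-world poses.

    A hypothetical helper, not part of the HyperReel codebase; assumes the
    camera looks down its local -z axis.
    """
    centers = poses[:, :, 3]     # camera positions, (N, 3)
    view_dirs = -poses[:, :, 2]  # viewing directions
    view_dirs = view_dirs / np.linalg.norm(view_dirs, axis=-1, keepdims=True)

    # Forward facing: every camera looks in roughly the same direction.
    mean_dir = view_dirs.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    min_cos = (view_dirs @ mean_dir).min()
    print("forward facing" if min_cos > 0.7 else "not forward facing",
          f"(min cosine to mean view direction: {min_cos:.2f})")

    # Inward-facing 360 captures look toward the centroid of the camera rig;
    # outward-facing ones look away from it.
    to_centroid = centers.mean(axis=0) - centers
    to_centroid /= np.linalg.norm(to_centroid, axis=-1, keepdims=True)
    mean_cos = (view_dirs * to_centroid).sum(-1).mean()
    print("inward facing" if mean_cos > 0 else "outward facing",
          f"(mean cosine toward rig centroid: {mean_cos:.2f})")

# e.g. describe_capture(poses)  # with `poses` from the poses_bounds.npy snippet above
```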

Also note that our approach works best for denser captures: given enough input views it can capture complex view-dependent effects like distorted reflections and refractions better than methods like NeRF, but it may struggle if your input views are relatively sparse (as mentioned in our paper).

I think that I'll go through the process of creating a custom dataset / running the method either later this week or next, and try to write up a step-by-step guide. In the meantime, please follow up with any additional questions, should you have them.

ZhenyanSun commented 1 year ago

Thanks very much.

1zgh commented 1 year ago

Thanks! I will try it. I see you mentioned distorted reflections and refractions; does that mean the method reconstructs non-Lambertian objects such as mirrors and water surfaces? And is there an algorithmic optimization for that?

benattal commented 1 year ago

Yep! Given enough views, our method should be able to reproduce the appearance of such objects. For example, see the reflections on the CD and the refractions through the bottle of liquid in this scene:

https://user-images.githubusercontent.com/2993881/211887264-2874afc8-d4de-4990-a7cc-351c1903efcf.mov

1zgh commented 1 year ago
  1. I replicated the same effect on the CD dataset, and it worked well. But now we're having some trouble with our own datasets.
  2. I have tried scenes that are inward facing (all cameras capturing an object from all around it). I am currently using the LLFF configuration (run_one_llff.sh) for training, but the training loss and PSNR are poor. When I tried to run my own dataset with DoNeRF's configuration files, I was told that many files were missing (sorry, I haven't used that dataset before). Is there any detailed guidance for this type of scene? Thanks!!
  3. Below is the result with the LLFF configuration, PSNR = 15.08. Also, the rendered visualization only lets you look forward, whereas I captured all around the object. The dataset used is the Ref-NeRF dataset. [screenshot: gs1]
benattal commented 1 year ago

Is this from the "real" dataset provided on the Ref-NeRF website (see below)?

[screenshot: the "real" dataset listing on the Ref-NeRF website]

If so, let me try our method on this data and get back to you.

Note that our method works quite well for densely captured scenes, but it's difficult to capture inward-facing 360-degree scenes with high angular density (so it might not work super well in this setting).

1zgh commented 1 year ago

Yes! It's the "real" dataset from the website. I'm going to try capturing something similar. You said that it's difficult to capture inward-facing 360-degree scenes with high angular density, so the method might not work well in this setting; could you please tell me the reason for that? Is it the geometric primitives?

benattal commented 1 year ago

Yep, it's because of the sample prediction network, which predicts geometric primitives / sample points that vary depending on the input (4D) ray. This allows the framework to reproduce complex view dependence, but it also means that the samples are not explicitly constrained for rays that are not observed during training, so it tends to lead to worse interpolation/extrapolation quality in sparse-view settings.
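To make that concrete, here is a toy PyTorch sketch of ray-conditioned sampling. It is not our actual network (which, among other things, uses a different ray parameterization and predicts geometric primitives rather than raw distances); it just illustrates why samples for unobserved rays are unconstrained: the sample locations are the output of a learned function of the input ray, so they receive gradients only for rays seen during training.

```python
import torch
import torch.nn as nn

class RaySampleNetwork(nn.Module):
    """Toy ray-conditioned sample predictor (illustrative, not HyperReel's model).

    Maps a ray (origin + direction, 6-D here for simplicity) to n_samples
    points along that ray. Rays never seen during training get no direct
    supervision on their predicted samples, which is the sparse-view
    limitation discussed above.
    """

    def __init__(self, n_samples: int = 32, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_samples),
        )

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
        rays = torch.cat([origins, dirs], dim=-1)            # (B, 6)
        # Softplus keeps the step sizes positive; cumsum makes the distances
        # monotonically increasing along the ray.
        deltas = torch.nn.functional.softplus(self.mlp(rays))
        t = torch.cumsum(deltas, dim=-1)                     # (B, n_samples)
        return origins[:, None, :] + t[:, :, None] * dirs[:, None, :]

net = RaySampleNetwork()
pts = net(torch.zeros(4, 3), torch.randn(4, 3))
print(pts.shape)  # torch.Size([4, 32, 3])
```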

This is one of the main limitations of the work, which we discuss in the conclusion of the paper:

[screenshot: the limitations discussion from the paper's conclusion]

1zgh commented 1 year ago

Thanks, I see! I'm still curious about the real Ref-NeRF scenes, though.

benattal commented 1 year ago

No problem! And I agree -- it's still definitely worth looking into.

1zgh commented 1 year ago

I would be very glad to work with you on this. Is there anything I can do to help speed up progress on it?