POSTECH-CVLab / SCNeRF

[ICCV21] Self-Calibrating Neural Radiance Fields

How to run an experiment with only photos? #3

Closed: franciscoWizz closed this 2 years ago

franciscoWizz commented 2 years ago

Hi,

I would like to run an experiment with your model using a list of pictures of an object, to get the estimated camera pose of each picture. How can I set up that experiment?

Thanks in advance,

jeongyw12382 commented 2 years ago

We have not adapted the data loader for this case, since our code assumed camera information was available in order to compare NeRF + COLMAP against our model. A simple trick is to remove the file-loading parts of the LLFF loader and set all the poses to be equal.
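
For concreteness, a minimal sketch of that trick (the helper name and signature here are hypothetical, not the actual loader API): skip the pose files entirely and hand every image the same identity pose, leaving a rough focal-length guess for self-calibration to refine.

import numpy as np

def make_identity_poses(num_images, height, width, focal_guess):
    # Hypothetical stand-in for the LLFF file-loading step: every image
    # gets the identity rotation and zero translation, and SCNeRF's
    # self-calibration is left to recover the true camera parameters.
    poses = np.tile(np.eye(4, dtype=np.float32)[:3, :4][None], (num_images, 1, 1))
    hwf = np.array([height, width, focal_guess], dtype=np.float32)
    return poses, hwf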

I've just added sample code for running with custom images in the new branch "custom". The code there may not be perfect, since it has not been fully verified. Furthermore, I'm not sure what kind of task you are planning to do, so evaluating the projected ray distance loss on a test set is currently unavailable in the "custom" branch; you should adjust the code slightly to reflect your idea. In the current version, the "test" set is equal to the "train" set.

python run_nerf.py \
    --config configs/llff_data/flower.txt \
    --expname $(basename "${0%.*}") \
    --chunk 8192 \
    --N_rand 1024 \
    --camera_model pinhole_rot_noise_10k_rayo_rayd \
    --ray_loss_type proj_ray_dist \
    --multiplicative_noise True \
    --i_ray_dist_loss 10 \
    --grid_size 10 \
    --ray_dist_loss_weight 0.0001 \
    --N_iters 800001 \
    --ray_o_noise_scale 1e-3 \
    --ray_d_noise_scale 1e-3 \
    --add_ie 200000 \
    --add_od 400000 \
    --add_prd 600000 \
    --lrate_decay 400 \
    --dataset_type custom \
    --run_without_colmap both

Don't forget to add "--run_without_colmap both" when running the code. Omitting it might result in a wrong initialization. Feel free to ask further questions about code usage.

jeongyw12382 commented 2 years ago

The code in the "custom" branch will not be merged into the master branch, since it is only for reference. If you have further questions about extending to other datasets, feel free to mail me at jeongyw12382@postech.ac.kr. The script above is a sample for running the code in the "custom" branch.

jeongyw12382 commented 2 years ago

Let me know if the newest version works fine in your environment, @franciscoWizz. Once you confirm, I'll close the issue.

franciscoWizz commented 2 years ago

Ok, thank you so much! I'll give it a try and let you know.

xufengfan96 commented 2 years ago

Hello,

Are you trying to get the camera pose of each image as a 4x4 matrix that contains the rotation and translation? And did you succeed in getting it?

Looking forward to your reply.

jeongyw12382 commented 2 years ago

Sorry for the late reply. I have added a description to the new issue you just opened. @xufengfan96

jeongyw12382 commented 2 years ago

Please reopen this issue whenever you need further help with it.

hmdolatabadi commented 2 years ago

@jeongyw12382 Hi. Thanks for the interesting paper and for providing the code. I read through the custom run command that you provided above:

python run_nerf.py \
    --config configs/llff_data/flower.txt \
    --expname $(basename "${0%.*}") \
    --chunk 8192 \
    --N_rand 1024 \
    --camera_model pinhole_rot_noise_10k_rayo_rayd \
    --ray_loss_type proj_ray_dist \
    --multiplicative_noise True \
    --i_ray_dist_loss 10 \
    --grid_size 10 \
    --ray_dist_loss_weight 0.0001 \
    --N_iters 800001 \
    --ray_o_noise_scale 1e-3 \
    --ray_d_noise_scale 1e-3 \
    --add_ie 200000 \
    --add_od 400000 \
    --add_prd 600000 \
    --lrate_decay 400 \
    --dataset_type custom \
    --run_without_colmap both

Is this going to run NeRF or SCNeRF? I ask because I saw a small difference from the .sh files in the original repo, where you also add a --ft_model to run SCNeRF. Also, could you tell me how to render a video using the trained model after training is done?

Thanks for your help in advance.

vishnukool commented 2 years ago

Hi @jeongyw12382. Two more questions, if you don't mind:

  1. Does the code in the "custom" branch with --dataset_type custom work for 360-degree scenes like the "tanks_and_temples" images, or is it only for forward-facing scenes like the LLFF fern dataset?
  2. If it does work for 360-degree scenes, can you confirm that it doesn't need any COLMAP camera parameters, initialization, etc.?

jeongyw12382 commented 2 years ago

@hmdolatabadi The script you've mentioned will run SCNeRF, since the camera model is set to "pinhole_rot_noise_10k_rayo_rayd". --ft_path loads a pre-trained model, so if you run the four stages independently, you should add ft_path to load the pre-trained model of the previous stage. However, if you run the script above, the code will automatically run the four stages sequentially. If you need more help, please let me know.
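
For example, running a later stage on its own would look like the script above with one extra flag (the checkpoint path here is hypothetical):

python run_nerf.py --config configs/llff_data/flower.txt ... --ft_path ./logs/flower/200000.tar

where "..." stands for the same flags as in the script above, and --ft_path points at the checkpoint saved by the previous stage.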

jeongyw12382 commented 2 years ago

@vishnukool I'll respond to this on the new issue you opened.

hmdolatabadi commented 2 years ago

@jeongyw12382 Thanks for your prompt reply, I appreciate it a lot. Could you also answer my second question: how can I generate a video sequence of the scenes using the model after training? Thanks.

jeongyw12382 commented 2 years ago

@hmdolatabadi

Depending on the data, the video rendering steps differ.

1) Tanks and Temples: render the full train and test data and connect the rendered images to obtain the video sequence.

2) LLFF: https://github.com/POSTECH-CVLab/SCNeRF/blob/dc57e9f6e763284a12bed4812e6945f49ee0ef5e/NeRF/run_nerf.py#L115 When you render the poses in the variable "render_poses", you get natural image sequences. Then concatenate the images to generate a video. (Spherified Camera Pose)

3) Synthetic: https://github.com/POSTECH-CVLab/SCNeRF/blob/dc57e9f6e763284a12bed4812e6945f49ee0ef5e/NeRF/run_nerf.py#L153 When you render the poses in the variable "render_poses", you get natural image sequences. Then concatenate the images to generate a video. (360 Camera Pose)

I recommend the links above for generating a camera path and rendering videos; a sketch of the video-writing step follows.
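
For reference, a minimal sketch of the "concatenate the images to generate a video" step, using the same imageio call that run_nerf.py uses; the random frames are placeholders for your rendered images.

import imageio
import numpy as np

# Placeholder frames: replace with the (N, H, W, 3) float array of
# renders produced for each pose in render_poses.
rgbs = np.random.rand(60, 120, 160, 3)

# Quantize to uint8 and write an mp4 (requires the imageio-ffmpeg backend).
to8b = lambda x: (255 * np.clip(x, 0, 1)).astype(np.uint8)
imageio.mimwrite('render_video.mp4', to8b(rgbs), fps=30, quality=8)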

hmdolatabadi commented 2 years ago

@jeongyw12382 Thanks a lot. I will check out the links and functions and see how it goes. Thanks again.

hmdolatabadi commented 2 years ago

@jeongyw12382 I tried training a model on images only, and everything went well. Now, to generate a video sequence, I tried the code above, but all of it needs the render_poses variable. I guess my original question comes back to this: when you only have images, how do you generate the render_poses variable so that you can render images? Thanks.

jeongyw12382 commented 2 years ago

@hmdolatabadi

It depends on the type of camera trajectory you want to render.

  1. If you want to generate a 360-scene video, then, based on the estimated poses, you can generate spherical poses with the code here (see the sketch after this list): https://github.com/POSTECH-CVLab/SCNeRF/blob/dc57e9f6e763284a12bed4812e6945f49ee0ef5e/NeRF/load_blender.py#L144

  2. If you want to reproduce the same camera trajectory as the train/val/test set (assuming the frames come from a video), then there are two choices:

    • If there are sufficiently many frames, you can directly re-render the train/val/test set to generate the video.
    • If there are not enough frames, you should implement code that estimates an intermediate camera trajectory from the given frames (poses).
  3. If you want to generate a circular path (refer to the LLFF dataset video), then you should utilize the code here: https://github.com/POSTECH-CVLab/SCNeRF/blob/dc57e9f6e763284a12bed4812e6945f49ee0ef5e/NeRF/load_llff.py#L299

  4. Otherwise, you probably need to implement code that generates a camera trajectory yourself.
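
For item 1, here is a minimal numpy sketch of the spherical pose generation performed in load_blender.py; the -30 degree elevation, radius of 4, and 40-frame count mirror that file's defaults and are only illustrative.

import numpy as np

def trans_t(t):
    # Translation along the z-axis.
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 1, t],
                     [0, 0, 0, 1]], dtype=np.float32)

def rot_phi(phi):
    # Rotation about the x-axis (elevation).
    return np.array([[1, 0, 0, 0],
                     [0, np.cos(phi), -np.sin(phi), 0],
                     [0, np.sin(phi), np.cos(phi), 0],
                     [0, 0, 0, 1]], dtype=np.float32)

def rot_theta(th):
    # Rotation about the y-axis (azimuth).
    return np.array([[np.cos(th), 0, -np.sin(th), 0],
                     [0, 1, 0, 0],
                     [np.sin(th), 0, np.cos(th), 0],
                     [0, 0, 0, 1]], dtype=np.float32)

def pose_spherical(theta_deg, phi_deg, radius):
    # Camera-to-world matrix on a sphere of the given radius,
    # looking back at the origin.
    c2w = trans_t(radius)
    c2w = rot_phi(phi_deg / 180.0 * np.pi) @ c2w
    c2w = rot_theta(theta_deg / 180.0 * np.pi) @ c2w
    c2w = np.array([[-1, 0, 0, 0],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1]], dtype=np.float32) @ c2w
    return c2w

# 40 poses sweeping a full circle at -30 degrees elevation, radius 4.
render_poses = np.stack([pose_spherical(a, -30.0, 4.0)
                         for a in np.linspace(-180, 180, 41)[:-1]], 0)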