kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV
MIT License

COLMAP GUI + imgs2poses.py still gets error #60

Closed alex04072000 closed 3 years ago

alex04072000 commented 3 years ago

I am using imgs2poses.py to estimate the camera poses for my own dataset. However, it always fails with ERROR: the correct camera poses for current points cannot be accessed. On the other hand, I can use the COLMAP GUI to reconstruct some of the camera poses (i.e. not every image's pose is estimated). I then ran imgs2poses.py with the same arguments on the sparse folder and database.db generated by the COLMAP GUI, but it still fails with the same error. Can you give me instructions on how to use the COLMAP GUI + imgs2poses.py so that the pose estimation works? Thank you!

kwea123 commented 3 years ago

The LLFF code (imgs2poses.py) requires that every image's pose be estimated, so it returns an error in your case. You can modify imgs2poses.py and my llff.py so that they only read the correctly estimated images and discard the rest, or you can tune COLMAP so that all image poses are estimated.
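
For example, here is a rough sketch of reading only the registered images (the import assumes COLMAP's scripts/python/read_write_model.py, or LLFF's equivalent colmap_read_model.py, is on your Python path; the folder names are placeholders for your own setup):

```python
import os
from read_write_model import read_images_binary

sparse_dir = 'my_scene/sparse/0'  # path to the COLMAP sparse model
images = read_images_binary(os.path.join(sparse_dir, 'images.bin'))

# images.bin only contains the images COLMAP managed to register,
# so comparing against the raw image folder shows what was dropped.
registered = {im.name for im in images.values()}
all_imgs = set(os.listdir('my_scene/images'))
print('unregistered:', sorted(all_imgs - registered))

# In llff.py, iterate over `images` (the registered set) instead of the
# image folder, so the unreconstructed files are skipped automatically.
for img_id, im in sorted(images.items()):
    pass  # build the pose from im.qvec / im.tvec here
```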

kwea123 commented 3 years ago

If you want to modify the python files, I suggest that you bring the content of imgs2poses.py (all you need is this function actually) into llff.py, i.e. read the COLMAP files directly from llff.py, since that makes every variable visible inside llff.py.

I will probably push a commit to dev that does this, i.e. eliminates the dependency on imgs2poses.py from the LLFF code.

alex04072000 commented 3 years ago

Thank you for your fast reply! I actually filtered the images that were not reconstructed by the COLMAP GUI out of the images folder. In more detail, I also ran dense reconstruction with the COLMAP GUI, which generates a dense folder containing all the images used for dense reconstruction, and I used those as the filtered images. Should this be acceptable for imgs2poses.py, since every image in the images folder then has a camera pose? Or am I misunderstanding some part of COLMAP? Thank you!

kwea123 commented 3 years ago

Hmm, I don't have experience with dense reconstruction. Can you try changing the string sparse to dense in this function (and maybe in other paths) and running imgs2poses.py to see if it succeeds?

alex04072000 commented 3 years ago

Thank you for your suggestions. I modified imgs2poses.py a little to ignore all the images that were not reconstructed by the COLMAP GUI. However, this might cause some problems in the boundary npy file (not sure). I will train on the data first and see if it works. Thank you very much again!

wolterlw commented 3 years ago

I also get the same error, but my suspicion is that it's because the camera only moves along one axis in my case. How do I go about constructing the boundary.npy file myself?
I have my camera intrinsic parameters, and I have depth data and tags in my images, so I am able to reconstruct the camera pose for each image. However, I'm not sure how to fill out the boundary.npy file so that it works with your pipeline. @kwea123 could you elaborate a bit on how to make the rotation matrices compliant with your code? By the way, thanks for a great repo! So far it's the easiest to interact with among the ones I've found.

kwea123 commented 3 years ago

@wolterlw Thanks for your kind words! Do you mean poses_bounds.npy? If you use your own camera poses (not from COLMAP), I'm afraid you need to make some code modifications.

Basically you need to do the following:

  1. Set the intrinsics according to image size https://github.com/kwea123/nerf_pl/blob/422ce4bf5755aec5ca7ff4b24868b21f03145e37/datasets/llff.py#L188-L193
  2. Set the extrinsics in "camera to world" format, and change its orientation to "right up back" https://github.com/kwea123/nerf_pl/blob/422ce4bf5755aec5ca7ff4b24868b21f03145e37/datasets/llff.py#L195-L199
  3. Find the nearest bounds for each image and scale the poses so that everything lies roughly inside [-1, 1] https://github.com/kwea123/nerf_pl/blob/422ce4bf5755aec5ca7ff4b24868b21f03145e37/datasets/llff.py#L205-L211

I believe the code is sufficiently commented; you just need to follow these steps (a rough sketch follows below). Don't hesitate to post another issue, or to share your data, if you have further questions.
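
If you build the file yourself, a minimal sketch of the packing might look like this (this is not an official script; c2ws, bounds, H, W and focal are placeholders for your own data, and the c2w matrices are assumed to be in the usual "right up back" (OpenGL) convention):

```python
import numpy as np

# c2ws: list of 3x4 camera-to-world matrices, columns = [right, up, back]
# bounds: list of per-image (near, far) depths; H, W, focal: intrinsics (px)
rows = []
for c2w, (near, far) in zip(c2ws, bounds):
    R, t = c2w[:, :3], c2w[:, 3:4]
    # llff.py undoes the stored "down right back" column order with
    # [col1, -col0, col2], so apply the inverse mapping here:
    R_llff = np.stack([-R[:, 1], R[:, 0], R[:, 2]], axis=1)
    hwf = np.array([[H], [W], [focal]])
    pose = np.concatenate([R_llff, t, hwf], axis=1)            # 3x5 matrix
    rows.append(np.concatenate([pose.ravel(), [near, far]]))   # 17 values
np.save('poses_bounds.npy', np.stack(rows))                    # shape (N, 17)
```

The scaling of step 3 is then applied inside llff.py itself, so you only need to supply the raw near/far depths here.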

kwea123 commented 3 years ago

@alex04072000 Can you share the data where COLMAP fails to estimate the poses for some images? I want to test that my code handles this case. Thanks!

kwea123 commented 3 years ago

The dependency on the LLFF code is now removed (on the dev branch). The poses from COLMAP are read and converted inside datasets/llff.py directly, and the result is verified to match LLFF's poses_bounds.npy (so nothing needs to change for previously trained models).

The remaining task is to check whether this code really ignores (I believe it does) the unreconstructed images and can successfully use the rest (the good images) to train. For this I'll need your help: either provide the data or check it yourself. I will close this issue for now; feel free to comment if you still encounter the error.

kainataltaf commented 3 years ago

Hey,

I am facing the same issue. I am using the bird dataset (https://vision.in.tum.de/data/datasets/3dreconstruction). Can you tell me what I should do?

Can you provide a link to your silica dataset?

kwea123 commented 3 years ago

@kainataltaf Are you using the latest code on the dev branch? What's the full error message?

The silica data is here

kainataltaf commented 3 years ago

@kwea123 No, I am using the master branch. The error is ERROR: the correct camera poses for current points cannot be accessed.

kwea123 commented 3 years ago

The master branch is old code kept to support Colab usage. If you don't use Colab, please switch to the dev branch and see if the error persists. If you absolutely need Colab and the master branch, the error won't be fixed there; you need to re-take the photos until COLMAP reconstructs every camera pose correctly.

kainataltaf commented 3 years ago

I am using Colab.

hetolin commented 3 years ago

Hi, when I use the code from the dev branch, it still hits an error:

    File "/nerf_pl/datasets/llff_1.py", line 216, in read_meta
        visibilities[j-1, i] = 1
    IndexError: index 25 is out of bounds for axis 0 with size 23

I used data created by the COLMAP GUI, but not all image poses were estimated. It seems this code fails to really ignore the unreconstructed images and use the rest (the good images) to train.
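
If it helps, my guess is that COLMAP image ids stay sparse when some images fail to register (here an id of 26 appears while only 23 images survive), so indexing a dense array with j-1 overflows. A sketch of the kind of remapping that might fix it (the names imdata and pts3d only mirror llff.py; this is a guess, not a tested patch):

```python
import numpy as np

# imdata = read_images_binary(...); pts3d = read_points3d_binary(...)
img_ids = sorted(imdata.keys())                  # registered image ids only
id2row = {img_id: row for row, img_id in enumerate(img_ids)}

visibilities = np.zeros((len(img_ids), len(pts3d)), dtype=np.uint8)
for i, pt in enumerate(pts3d.values()):
    for j in pt.image_ids:                       # ids of images seeing this point
        if j in id2row:                          # skip unregistered image ids
            visibilities[id2row[j], i] = 1
```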

asad-ak commented 2 years ago

> The dependency on the LLFF code is now removed (on the dev branch). The poses from COLMAP are read and converted inside datasets/llff.py directly, and the result is verified to match LLFF's poses_bounds.npy (so nothing needs to change for previously trained models).
>
> The remaining task is to check whether this code really ignores (I believe it does) the unreconstructed images and can successfully use the rest (the good images) to train. For this I'll need your help: either provide the data or check it yourself. I will close this issue for now; feel free to comment if you still encounter the error.

In the dev branch, how do I do step 3 for my own data, i.e. "Run colmap sparse reconstruction"? Do I need to do it directly from the GUI, since there is no longer LLFF's imgs2poses.py to do it for me?
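
For reference, the sparse reconstruction can also be run without the GUI by invoking the colmap binary directly, which is roughly what imgs2poses.py used to automate. A minimal sketch of the standard feature-extract / match / map pipeline, assuming colmap is on your PATH and with placeholder paths:

```python
import os
import subprocess

scene = 'my_scene'  # folder containing an images/ subfolder

subprocess.run(['colmap', 'feature_extractor',
                '--database_path', f'{scene}/database.db',
                '--image_path', f'{scene}/images'], check=True)
subprocess.run(['colmap', 'exhaustive_matcher',
                '--database_path', f'{scene}/database.db'], check=True)
os.makedirs(f'{scene}/sparse', exist_ok=True)
subprocess.run(['colmap', 'mapper',
                '--database_path', f'{scene}/database.db',
                '--image_path', f'{scene}/images',
                '--output_path', f'{scene}/sparse'], check=True)
```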