How did you compute the camera poses? They look like they might be wrong in the screenshot provided.
Hello @tancik, I think the poses are good, but I could be off. Based on the GPS and gimbal data, I can compute both the cameras' translation and rotation in the world coordinate system. Following https://docs.nerf.studio/en/latest/quickstart/data_conventions.html#camera-view-space, I assume that +Z points backward, away from the camera. Check out this image for a clearer idea (just showing a few cases):
However, the tricky part is that not all of the aerial images are top-down: some are near-nadir (orthographic-looking) views, while others look towards the horizon.
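In case it helps, this is a minimal sketch of how such a camera-to-world matrix can be built in that convention (plain NumPy; the look-at formulation and the choice of up vector are my own assumptions, not nerfstudio API):

```python
import numpy as np

def c2w_from_lookat(position, target, up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world matrix in the nerfstudio/OpenGL convention:
    camera +X right, +Y up, +Z backward (the view direction is -Z).

    For straight-down (nadir) shots the world up vector is parallel
    to the view direction, so pass a different up there (e.g. north).
    """
    forward = target - position
    forward = forward / np.linalg.norm(forward)   # viewing direction
    z = -forward                                  # camera +Z points away from the scene
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)                     # camera +X (right)
    y = np.cross(z, x)                            # camera +Y (up)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2] = x, y, z  # rotation: camera axes in world coords
    c2w[:3, 3] = position                         # translation: camera center in world
    return c2w
```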
Currently, I am attempting to obtain poses using COLMAP. However, not all poses can be accurately estimated.
Do you think I could improve the poses or the training step of Nerfacto? I would greatly appreciate any insights on this. Thank you!
Poses calculated with the ns-process-data module appear to yield better results, although not all of the images were registered:
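For reference, the processing step is the standard nerfstudio COLMAP pipeline, i.e. something like:
ns-process-data images --data {DATA_DIR} --output-dir {PROCESSED_DATA_DIR}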
However, both the reconstructed scene and the extracted mesh exhibit significant artifacts, resulting in the loss of many details and data:
Reconstructed scene:
Extracted mesh:
The mesh was exported with the following command:
ns-export poisson --load-config outputs/output/nerfacto/2023-06-23_075635/config.yml --output-dir exports/mesh/ --target-num-faces 50000 --num-pixels-per-side 2048 --normal-method open3d --num-points 1000000 --remove-outliers True --use-bounding-box True --bounding-box-min -1 -1 -1 --bounding-box-max 1 1 1
As you can see, many artifacts are present and important information is lost in the resulting output.
Maybe take a look at https://github.com/nerfstudio-project/nerfstudio/issues/1110#issuecomment-1541021834
Unfortunately, I cannot achieve better results than the ones I have already shown. I will now close the issue. If anyone has further information on how to manage this kind of data, please let me know.
Hi!
I am trying to reconstruct a 3D scene recorded by a drone. The images I have look like the ones below:
I also have access to the drone's GPS information and the gimbal rotation angles, which provide the poses associated with each image:
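For context, I convert the GPS coordinates to local metric coordinates before building the rotations. A minimal flat-earth approximation (my own helper, adequate at drone scale) looks like this:

```python
import numpy as np

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in meters

def gps_to_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Rough lat/lon/alt -> local east-north-up offsets in meters
    around a reference point, using a flat-earth approximation."""
    east = np.radians(lon - ref_lon) * EARTH_RADIUS * np.cos(np.radians(ref_lat))
    north = np.radians(lat - ref_lat) * EARTH_RADIUS
    up = alt - ref_alt
    return np.array([east, north, up])
```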
To train the network, I executed the following command:
ns-train nerfacto --data {PROCESSED_DATA_DIR} --pipeline.datamanager.camera-optimizer.mode off
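For completeness, the custom poses go into a transforms.json in the nerfstudio data format before training. A minimal sketch (the intrinsics are placeholder values, and image_paths / c2w_matrices are illustrative variables assumed to hold the outputs of the pose code above):

```python
import json
import numpy as np

# Placeholder example data; in practice these come from the GPS/gimbal
# pose construction (both names are illustrative).
image_paths = ["images/frame_0001.jpg"]
c2w_matrices = [np.eye(4)]  # 4x4 camera-to-world matrices

meta = {
    "camera_model": "OPENCV",
    "fl_x": 2304.0, "fl_y": 2304.0,  # focal lengths in pixels (placeholders)
    "cx": 1920.0, "cy": 1080.0,      # principal point (placeholder)
    "w": 3840, "h": 2160,            # image resolution (placeholder)
    "frames": [
        {"file_path": path, "transform_matrix": c2w.tolist()}
        for path, c2w in zip(image_paths, c2w_matrices)
    ],
}

with open("transforms.json", "w") as f:
    json.dump(meta, f, indent=2)
```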
Even though the camera positions seem to be correctly estimated (judging from the output):
The reconstructed scene does not meet my expectations, as it contains numerous artifacts. I have attempted to address this issue by following the suggestions mentioned in #1110, but I have not been able to achieve satisfactory results:
I would greatly appreciate any insights or recommendations you can provide to improve the quality of the reconstructed scene.
Thank you for your time!