qhdqhd closed this issue 2 years ago
Hi there, first of all: cool setup!
(1) shouldn't be a problem as long as the camera poses are precise enough for the "Train extrinsics" option to fix any slight inaccuracies.
I suspect the problem comes more from (2) and (3).
For (2), we recently (yesterday) pushed support for per-camera metadata. You can customize it via the python bindings
testbed.nerf.training.dataset.metadata[image_id].camera_distortion = ...
testbed.nerf.training.dataset.metadata[image_id].focal_length = ...
testbed.nerf.training.dataset.metadata[image_id].principal_point = ...
More specifics are in python_api.cu. The .json-based loader unfortunately only supports a single set of camera parameters per json file.
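To make the bindings above concrete, here is a minimal sketch of preparing per-image intrinsics for assignment. The attribute names match the snippet above, but the unit conventions (focal length in pixels, principal point normalized to the image size) are assumptions on my part; check python_api.cu for the exact convention. The `cameras` loop at the bottom is purely hypothetical.

```python
# Sketch: convert per-camera COLMAP-style intrinsics (all in pixels) into
# the values assigned via the python bindings. The normalization below is
# an assumption -- verify against python_api.cu before relying on it.

def per_image_intrinsics(fx, fy, cx, cy, width, height):
    """Return (focal_length, principal_point) for one image."""
    focal_length = (fx, fy)                      # assumed: kept in pixels
    principal_point = (cx / width, cy / height)  # assumed: normalized to [0, 1]
    return focal_length, principal_point

# Hypothetical usage against the bindings:
# for image_id, cam in enumerate(cameras):
#     f, pp = per_image_intrinsics(*cam)
#     testbed.nerf.training.dataset.metadata[image_id].focal_length = f
#     testbed.nerf.training.dataset.metadata[image_id].principal_point = pp
```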
For (3), this will largely manifest as artifacts when trying to view the scene from the top or bottom (i.e. outside of the convex hull of the training data). If you plan for the viewpoint to stay close to the ring of cameras, you should be fine.
Curious to hear whether you find the Python bindings w.r.t. (2) helpful.
Cheers!
My dataset contains about 100 views; the viewpoints form a ring, all looking towards the circle center. The cameras are arranged like this:
The scene size is about 15 meters.
The first row shows five views rendered at the training viewpoints. The second row shows the corresponding five training views.
My parameters: aabb_scale is set to 16, and I adjusted the scale and offset of the camera positions as described in https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md, so the scene is fully covered by the bounding box.
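For reference, the scale/offset adjustment from nerf_dataset_tips.md can be sanity-checked with a tiny script. This is a sketch under the assumption that positions are mapped as p' = p * scale + offset; the specific numbers (a 7.5 m camera-ring radius for the ~15 m scene, a target radius of 0.25 units) are made up for illustration.

```python
# Sketch: apply the top-level "scale" and "offset" from transforms.json to
# a camera position, to check the ring of cameras lands where intended
# inside instant-ngp's bounding box.
# Assumed mapping (see nerf_dataset_tips.md): p' = p * scale + offset.

def map_position(pos, scale, offset):
    return [p * scale + o for p, o in zip(pos, offset)]

# Example: a camera 7.5 m from the ring center, shrunk to radius 0.25
# and recentered at the middle of the unit cube.
scale = 0.25 / 7.5            # made-up: map 7.5 m radius to 0.25 units
offset = [0.5, 0.5, 0.5]      # recenter at the middle of the unit cube
mapped = map_position([7.5, 0.0, 0.0], scale, offset)
print(mapped)  # approximately [0.75, 0.5, 0.5]
```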
The people in the center of the scene render fine, but there are quite a few white floaters and ghosting artifacts around them. Why might that be? There are three possible reasons I can think of: