centreborelli / satnerf

Satellite Neural Radiance Fields
https://centreborelli.github.io/satnerf/
BSD 3-Clause "New" or "Revised" License

Ray generation, use of the SRTM library, and rotation of the camera pose for novel view synthesis #9

Closed muthukumaranR closed 1 year ago

muthukumaranR commented 1 year ago

Hello! Thanks for the great work. Applying NeRF to satellite imagery is a cool idea, and the implementation is great! I had some questions about the way the rays are generated.

  1. I am a little unclear on how get_rays() in satellite.py works. As far as I understand, for each point defined by (row, col), two points are obtained: one for the min_alt observed in the image and another for max_alt. These two points, after normalization, constitute a ray. This localization, however, is not at the pixel level but on a scene basis: we don't know the height of each pixel and hence don't really know where the rays start and end for every (row, col) in the scene. I feel I am missing something here.
  2. Is there any particular reason to use SRTM (rpcm uses it) rather than the provided DEM maps?
  3. Do you know how I can generate novel views from the SatNeRF model by interpolating ray origins and directions? I have an idea of how to obtain the views with a pinhole camera model, but I am unsure how to do it with your RPC model. Any tips are greatly appreciated!
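
For reference, here is a minimal numpy sketch of how I currently picture the ray construction in question 1. Note that toy_localization is a hypothetical, purely linear stand-in for the RPC localization function, not the actual satellite.py code:

```python
import numpy as np

def toy_localization(rows, cols, alt):
    # HYPOTHETICAL stand-in for RPC localization: maps image
    # coordinates (row, col) at a given altitude to 3D scene
    # coordinates. A real pipeline would evaluate the RPC model here.
    x = cols * 0.5 + alt * 0.1
    y = rows * 0.5 - alt * 0.1
    z = np.full_like(x, alt, dtype=float)
    return np.stack([x, y, z], axis=-1)

def get_rays_sketch(rows, cols, min_alt, max_alt):
    # Points where each pixel's viewing ray crosses the maximum and
    # minimum altitude planes of the scene.
    xyz_near = toy_localization(rows, cols, max_alt)  # ray starting points
    xyz_far = toy_localization(rows, cols, min_alt)   # ray ending points
    d = xyz_far - xyz_near
    # Normalize the directions; the ray bounds follow from the two
    # altitude planes shared by the whole scene.
    lengths = np.linalg.norm(d, axis=-1, keepdims=True)
    dirs = d / lengths
    return xyz_near, dirs, lengths  # origin, unit direction, ray length

rows = np.array([0.0, 10.0])
cols = np.array([0.0, 10.0])
origins, dirs, lengths = get_rays_sketch(rows, cols, min_alt=0.0, max_alt=100.0)
print(origins.shape, dirs.shape)  # (2, 3) (2, 3)
```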

Thanks for your time and thanks again for the great work!

rogermm14 commented 1 year ago

Hi @muthukumaranR ! Thanks for your comment.

  1. We extract min_alt and max_alt from the DFC2019 ground truth DEM data. Then get_rays() uses the RPC of each input image to find the 3D points that correspond to each pixel at max_alt and at min_alt (= the starting and ending points of each ray, as explained in our paper).
  2. Not really. It could be replaced with another worldwide database.
  3. To mimic (locally) the behavior of a virtual RPC camera and generate novel views beyond those of the test set, you could create your own affine camera models (where all rays of the virtual camera share the same direction). Keep in mind that all 3D point coordinates used for novel view synthesis must lie in the interval [-1, 1]. We will release this functionality someday, probably in early 2023.

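A minimal sketch of such an affine camera, assuming the convention described in point 3 (this is a hypothetical illustration, not the to-be-released code): every ray shares a single direction, and the ray origins are placed on a grid inside the normalized [-1, 1] scene cube.

```python
import numpy as np

def affine_camera_rays(h, w, direction):
    # Sketch of a virtual affine camera: all rays share the same
    # unit direction, so changing that direction rotates the view.
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Ray origins on a regular grid spanning the top face of the
    # normalized scene cube, with all coordinates in [-1, 1].
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    origins = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    dirs = np.broadcast_to(d, origins.shape).copy()
    return origins, dirs

# Tilt the shared direction to synthesize a novel (oblique) viewpoint.
origins, dirs = affine_camera_rays(4, 4, direction=[0.2, 0.0, -1.0])
print(origins.shape, dirs.shape)  # (16, 3) (16, 3)
```

Interpolating the shared direction between two such cameras would then give a smooth transition between viewpoints, which is much simpler than interpolating a full RPC model.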
keloee commented 1 year ago

Have you solved the novel view synthesis problem? I still have little idea of how I should rotate an RPC camera; a little help would be greatly appreciated!

Thanks for your time!