snavely / bundler_sfm

Bundler Structure from Motion Toolkit

Exif data + Sparse reconstruction #38

Closed: v4ven27 closed this issue 5 years ago

v4ven27 commented 8 years ago

Hi, I'm trying to use Bundler to create a sparse map of the following structure:

[image: dji_0104] https://cloud.githubusercontent.com/assets/7323040/17905977/33471c7a-6943-11e6-83e1-99c2cdf7caa0.jpg

I have about 40 images (resized to 3600 x 2400) representing different views of the structure. The EXIF data for the images does not contain pitch, roll, and yaw (RPY) by default. I was curious to see how Bundler handled this; here's the output:

[image: full_images] https://cloud.githubusercontent.com/assets/7323040/17906361/f758a524-6944-11e6-9e55-b62d25940cbb.png

Next, I ran Bundler on images with RPY as part of the exif data and here's the output:

[image: rpy_images] https://cloud.githubusercontent.com/assets/7323040/17907845/b1e6c38e-694b-11e6-8d86-8dbef42b8d71.png

I'd appreciate any insight to help me understand the following:

  • I'm unsure if this is a correct sparse point cloud.
  • How critical is it for Bundler to be provided with the RPY data?
  • Are the camera poses in the bundler.out file represented in the world coordinate frame?

Thanks!

snavely commented 8 years ago

Hi,

Bundler doesn't need RPY metadata -- in fact it doesn't use it even if it is there. However, it does expect focal length information in the Exif data and so if you are stripping that out it will likely produce bad results.
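As a quick sanity check, something like the following minimal sketch (using Pillow; the filename is just an example from this thread) can confirm whether the FocalLength tag survived the resizing step:

```python
# Sketch: check whether a JPEG still carries the EXIF focal length
# that Bundler's focal-length extraction relies on (requires Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

def focal_length_mm(path):
    exif = Image.open(path)._getexif() or {}
    tags = {TAGS.get(key, key): value for key, value in exif.items()}
    return tags.get("FocalLength")  # None means Bundler has no focal length to work with

print(focal_length_mm("dji_0104.jpg"))  # focal length in mm; Bundler converts to pixels via CCD width
```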

The point cloud in the second screenshot looks possibly okay -- hard to tell from a still image though.

The camera positions will be in the same coordinate system as the points, but the whole reconstruction will be in an arbitrary coordinate system -- definitely not a geocentric one, for instance.
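To check this concretely, here is a minimal sketch, assuming the documented "Bundle file v0.3" layout (per camera: one "<f> <k1> <k2>" line, three rows of the rotation R, one row of the translation t), that recovers the camera centers as C = -R^T t in the same frame as the points:

```python
# Sketch: read camera centers from bundle.out (Bundle file v0.3 layout assumed).
# The centers come out in the same arbitrary, non-geocentric frame as the
# reconstructed points.
import numpy as np

def read_camera_centers(path):
    with open(path) as fh:
        lines = [line for line in fh if not line.startswith("#")]
    num_cameras, _num_points = (int(v) for v in lines[0].split())
    centers, idx = [], 1
    for _ in range(num_cameras):
        # lines[idx] is "<f> <k1> <k2>"; the next three lines are R, then one line is t
        R = np.array([[float(v) for v in lines[idx + row].split()] for row in (1, 2, 3)])
        t = np.array([float(v) for v in lines[idx + 4].split()])
        centers.append(-R.T @ t)  # camera position C = -R^T t
        idx += 5  # advance to the next camera block
    return np.array(centers)
```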

Hope this helps,

Noah


v4ven27 commented 8 years ago

Hi Noah,

Thanks for your answer! I'm a little confused about this statement:

The camera positions will be in the same coordinate system as the points, but the whole reconstruction will be in an arbitrary coordinate system -- definitely not a geocentric one, for instance.

I was under the impression that the converse was true: the points would be in the same coordinate frame as the camera pose (which I had thought would be in the world frame based on the GPS coordinates in the EXIF metadata).

I'd like to georeference this point cloud (sparse/dense) to compare it with similarly georeferenced objects. How can I determine a transformation from this arbitrary reference frame to a geocentric one?

Thanks!

snavely commented 8 years ago

You might think that the model would be georegistered in your case, but the truth is that Bundler ignores GPS coordinates in the Exif metadata.

In order to georeference the model, you will need to compute the transformation that maps the camera positions Bundler computes to the geotags (unfortunately, Bundler doesn't have this feature). The easiest way to do this would be to convert your GPS positions to a coordinate system like ECEF or local ENU, and then use absolute orientation to compute the transformation from Bundler coordinates to ECEF/ENU. (There is absolute orientation code in MATLAB, for instance: https://www.mathworks.com/matlabcentral/fileexchange/26186-absolute-orientation-horn-s-method?requestedDomain=www.mathworks.com .)
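For example, a minimal sketch of that workflow in Python/NumPy, using the Umeyama variant of absolute orientation (the helper names and array variables are illustrative; any robust absolute-orientation implementation such as the MATLAB one above would work equally well):

```python
# Sketch: georeference Bundler camera positions against GPS geotags.
# 1) convert (lat, lon, alt) geotags to ECEF,
# 2) solve for a similarity transform dst ~ s * R @ src + t via absolute orientation,
# 3) apply the same transform to the sparse points.
import numpy as np

WGS84_A, WGS84_E2 = 6378137.0, 6.69437999014e-3  # WGS84 semi-major axis, eccentricity^2

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(lat) ** 2)
    return np.array([(n + alt_m) * np.cos(lat) * np.cos(lon),
                     (n + alt_m) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - WGS84_E2) + alt_m) * np.sin(lat)])

def absolute_orientation(src, dst):
    """Return s, R, t minimizing ||dst_i - (s * R @ src_i + t)|| over corresponding rows."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    return s, R, mu_dst - s * R @ mu_src

# bundler_centers: Nx3 camera positions from bundle.out (e.g. -R^T t per camera);
# geotags: Nx3 (lat, lon, alt) from the images' EXIF, in the same camera order.
# ecef = np.array([lla_to_ecef(*g) for g in geotags])
# s, R, t = absolute_orientation(bundler_centers, ecef)
# georef_points = (s * (R @ sparse_points.T)).T + t
```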

Hope this helps, Noah
