Closed shaunktw closed 4 years ago
So I think we can extract the camera position and angle from opensfm/reconstruction.json:
"shots": {
"DSC00298.JPG": {
"orientation": 1,
"camera": "v2 sony dsc-wx220 1800 2400 perspective 0",
"capture_time": 1467198160.0,
"gps_dop": 15.0,
"rotation": [
2.4804502253475951,
-1.5131942004329673,
-0.15625880101596284
],
"translation": [
-19.708380746486213,
-46.029045816851102,
14.960096193751962
],
"gps_position": [
-31.69669311275203,
-40.237064567700145,
1.9997939048334956
]
},
It's a matter of exporting that to a usable format and then joining it with info from the texturing data. This could be solved in part in ODM and in part in WebODM with @pierotofy.
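For reference, the shot data can be pulled out with plain JSON parsing. A minimal sketch, using a trimmed inline sample standing in for `opensfm/reconstruction.json` (whose top level is a list of reconstructions); values are shortened from the snippet above:

```python
import json

# Inline stand-in for opensfm/reconstruction.json (normally: json.load(open(path)))
sample = """[{"shots": {"DSC00298.JPG": {
    "rotation": [2.48045, -1.51319, -0.15626],
    "translation": [-19.70838, -46.02905, 14.96010],
    "gps_position": [-31.69669, -40.23706, 1.99979]}}}]"""

reconstructions = json.loads(sample)   # top level is a list of reconstructions
shots = reconstructions[0]["shots"]    # one entry per source image

for name, shot in shots.items():
    print(name, shot["rotation"], shot["translation"], shot["gps_position"])
```

From here the per-shot fields can be exported to whatever format the viewer needs.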
reconstruction.json could probably be imported into the Potree viewer in WebODM, so visualizing the cameras shouldn't be too difficult to add. Tying in the rays would be a little more involved.
I think tying the camera to a face will be the most challenging. I'm going to ping the mvs-texturing guys to see if we can extract that info.
What should be used for the camera position: gps_position or translation? (Or rather: how do I get the x, y, z coordinates and the orientation from the data in the JSON file?)
Good question! I think you have to use both; you must translate the position to match the reconstruction. But I've not tried to do that myself.
Seems strange that they have to be translated again. What are the GPS coordinates referenced to in the first place, then? I will try both tomorrow and report back with what works.
Probably because the sparse point cloud is unreferenced.
OK, this is using only gps_position: http://i.imgur.com/r5SS5nP.png and this is using only translation: http://i.imgur.com/3KluTBn.png
I have no idea what translation does. It seems that gps_position works fine, though.
Currently struggling with the rotation. I have a feeling the rotation data is completely unreliable. Looking at this, the cameras appear to be pointing in random directions, while they were most probably all pointing down. Blue lines are vec3(0,1,0) rotated using the rotation from reconstruction.json: http://i.imgur.com/A1y32yU.png
(Forgot to mention the model is rotated (-90, 0, 0) to fit the cameras' coordinate system; could that be the issue?)
Seems I forgot to rotate the rotation itself too, but the problem persists; they are still pointing in opposite directions: http://i.imgur.com/HvhWNRN.png
OpenSfM already displays the camera positions with their built-in viewer, so perhaps look how they do it in their source: https://github.com/mapillary/OpenSfM/blob/master/viewer/reconstruction.html
That helped regarding the camera rotation. It seems translation and rotation together give the camera's rotation and position, but they have to be combined (see the code you linked). I initially thought rotation would just be Euler angles, but it is an axis-angle rotation that is applied to the translation. I wonder why they didn't just include the optical center and the rotation as a quaternion; that would be much less confusing.
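Based on the linked viewer code, here is a dependency-free Python sketch of that combination, assuming OpenSfM's convention: `rotation` is an axis-angle (Rodrigues) vector R mapping world to camera coordinates, so the optical center is -Rᵀt and the inverse rotation is obtained by negating the axis-angle vector. Treat this as an illustration of the convention, not verified against this dataset:

```python
import math

def rotate(v, rvec):
    """Rotate vector v by the axis-angle vector rvec (Rodrigues' formula)."""
    theta = math.sqrt(sum(c * c for c in rvec))
    if theta < 1e-12:
        return list(v)                      # near-zero rotation: identity
    k = [c / theta for c in rvec]           # unit rotation axis
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    kxv = [k[1]*v[2] - k[2]*v[1],           # cross product k x v
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0]]
    kdv = sum(ki * vi for ki, vi in zip(k, v))  # dot product k . v
    return [v[i]*cos_t + kxv[i]*sin_t + k[i]*kdv*(1 - cos_t) for i in range(3)]

def optical_center(rotation, translation):
    """Camera position in world coordinates: -R^T * t."""
    neg_t = [-c for c in translation]
    return rotate(neg_t, [-c for c in rotation])   # negated rvec = inverse rotation

def view_direction(rotation):
    """World-space viewing direction: R^T applied to the camera-frame +Z axis."""
    return rotate([0.0, 0.0, 1.0], [-c for c in rotation])
```

With zero rotation the optical center reduces to -t, which matches the intuition that translation alone is not the camera position.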
Next issue: GPS altitude.
It seems the gps_position z value is almost the same for every shot (http://i.imgur.com/9Xzwi8y.png), but the actual GPS altitude from the EXIF data varies greatly (http://i.imgur.com/eWh3Yph.png).
Is this intended behaviour or a bug?
How did you take the images? I believe some UAVs use barometric pressure to maintain altitude, because the GPS reading has a bigger error. So maybe the actual camera positions really are almost the same for every shot, even if the EXIF GPS data varies so much.
If it is called gps_position, it should use the GPS data, right? I don't know anything about the drone that took the photos.
For the Bellus dataset, the images were taken with a Sony WX-220 on a SenseFly eBee drone. The images are referenced by the flight planning software using drone logs. IIRC the eBee does use barometric pressure to monitor altitude.
I am trying something similar: displaying cameras over the point cloud so that clicking one shows its image over the point cloud.
I used the rotation matrices from bundle_r000.out, mapping them to images with img_list.txt.
However, although the cameras look properly aligned, the images are not displayed correctly. There seems to be an axis-orientation issue.
ODM now generates a shots.geojson in odm_report that can be used to display the camera angles/positions in a 3D viewer (WebODM already implements that, and displays thumbnails too).
If something was missed and still needs to be implemented, please re-open? :pray:
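Since shots.geojson is standard GeoJSON (a FeatureCollection of Points), the camera positions can be read with plain JSON parsing. A sketch with an inline stand-in file; the property names here (e.g. `filename`) are illustrative assumptions, and the exact properties vary by ODM version, so inspect your own output:

```python
import json

# Inline stand-in for odm_report/shots.geojson; property names are hypothetical.
sample = """{"type": "FeatureCollection", "features": [
  {"type": "Feature",
   "geometry": {"type": "Point", "coordinates": [-8.9, 39.5, 120.0]},
   "properties": {"filename": "DSC00298.JPG"}}]}"""

shots = json.loads(sample)
# Each feature's geometry carries the camera position (lon, lat, alt here).
positions = [f["geometry"]["coordinates"] for f in shots["features"]]
print(positions)
```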
I think this is broadly complete. Pix4D does some cool stuff with tracing rays back to their origin, etc., but I get the sense it's mostly just for oohs and ahhs.
Pix4D has a rayCloud feature that clips the cameras' thumbnails at the location where the selected point is visible in the original images. An example: https://support.pix4d.com/hc/en-us/articles/202557999-Menu-View-rayCloud-3D-View#gsc.tab=0. This would be a meaningful feature for the following reasons: