OpenDroneMap / ODM

A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷
https://opendronemap.org
GNU Affero General Public License v3.0

ODM to display the camera angles to associate images with 3D Mesh #557

Closed. shaunktw closed this issue 4 years ago

shaunktw commented 7 years ago

Pix4D has a rayCloud feature that, when you select a 3D point, shows thumbnails of the cameras cropped to where that point is visible in the original images. An example: https://support.pix4d.com/hc/en-us/articles/202557999-Menu-View-rayCloud-3D-View#gsc.tab=0. This would be a meaningful feature for the following reasons:

dakotabenjamin commented 7 years ago

So I think we can extract the camera position and angle from opensfm/reconstruction.json:

  "shots": {
            "DSC00298.JPG": {
                "orientation": 1, 
                "camera": "v2 sony dsc-wx220 1800 2400 perspective 0", 
                "capture_time": 1467198160.0, 
                "gps_dop": 15.0, 
                "rotation": [
                    2.4804502253475951, 
                    -1.5131942004329673, 
                    -0.15625880101596284
                ], 
                "translation": [
                    -19.708380746486213, 
                    -46.029045816851102, 
                    14.960096193751962
                ], 
                "gps_position": [
                    -31.69669311275203, 
                    -40.237064567700145, 
                    1.9997939048334956
                ]
            }, 

It's a matter of exporting that to a usable format and then joining it with info from the texturing data. This could be solved partly in ODM and partly in WebODM with @pierotofy.
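
For anyone exploring this: the top level of reconstruction.json is a list of reconstructions, each with a "shots" dictionary keyed by image filename. A rough sketch of dumping the per-shot pose fields (the path assumes the default ODM project layout):

    import json

    # Rough sketch: list each shot's pose-related fields from an OpenSfM
    # reconstruction. The path below assumes the default ODM project layout.
    with open("opensfm/reconstruction.json") as f:
        reconstructions = json.load(f)  # top level is a list of reconstructions

    for recon in reconstructions:
        for name, shot in recon["shots"].items():
            print(name, shot["gps_position"], shot["rotation"], shot["translation"])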

pierotofy commented 7 years ago

reconstruction.json could probably be imported into the potree viewer in WebODM, so visualization of the cameras shouldn't be too difficult to add. Tying in the rays would be a little more involved.

dakotabenjamin commented 7 years ago

I think tying the camera to a face will be the most challenging. I'm going to ping the mvs-texturing guys to see if we can extract that info.

fredlllll commented 7 years ago

What should be used for the camera position, gps_position or translation? (Or rather: how do I get the x, y, z coordinates and the orientation from the data in the JSON file?)

dakotabenjamin commented 7 years ago

Good question! I think you have to use both: you must translate the position to match the reconstruction. But I haven't tried to do that.

fredlllll commented 7 years ago

Seems strange that they have to be translated again. What are the GPS coordinates referenced to in the first place then? I will try both tomorrow and report back with what is right.

dakotabenjamin commented 7 years ago

Probably because the sparse point cloud is unreferenced.

fredlllll commented 7 years ago

OK, this is using only the gps_position: http://i.imgur.com/r5SS5nP.png and this is using only the translation: http://i.imgur.com/3KluTBn.png

I have no idea what translation does. It seems that gps_position works fine though.

fredlllll commented 7 years ago

Currently struggling with the rotation. I have a feeling the rotation data is completely unreliable. Looking at this, it looks like the cameras are just pointing in random directions, while they were most probably all pointing down. Blue lines are vec3(0,1,0) rotated using the rotation from reconstruction.json: http://i.imgur.com/A1y32yU.png

(Forgot to mention the model is rotated (-90,0,0) to fit in with the camera's coordinate system; could that be the issue?)

Seems I forgot to rotate the rotation itself too, but the problem persists; they are still pointing in opposite directions: http://i.imgur.com/HvhWNRN.png

pierotofy commented 7 years ago

OpenSfM already displays the camera positions with its built-in viewer, so perhaps look at how they do it in their source: https://github.com/mapillary/OpenSfM/blob/master/viewer/reconstruction.html

fredlllll commented 7 years ago

That helped regarding the camera rotation. It seems that translation and rotation together give the camera's rotation and position, but they have to be combined (see the code you linked). I initially thought rotation would just be Euler angles, but it is actually an axis-angle rotation that also has to be applied to the translation. I wonder why they didn't just include the optical center and the rotation as a quaternion; that would be much less confusing.
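
For anyone landing here later: the usual SfM convention (and, as far as I can tell, what the linked viewer does) is that rotation is an axis-angle (Rodrigues) vector for the world-to-camera rotation R and translation is t, so the optical center is -Rᵀt and the viewing direction is Rᵀ(0,0,1). A rough numpy sketch under that assumption, using the DSC00298.JPG values from the snippet above:

    import numpy as np

    def axis_angle_to_matrix(r):
        # Rodrigues formula: convert an axis-angle vector to a 3x3 rotation matrix.
        r = np.asarray(r, dtype=float)
        theta = np.linalg.norm(r)
        if theta < 1e-12:
            return np.eye(3)
        k = r / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    # Values for DSC00298.JPG from the reconstruction.json excerpt above.
    rotation = [2.4804502253475951, -1.5131942004329673, -0.15625880101596284]
    translation = np.array([-19.708380746486213, -46.029045816851102, 14.960096193751962])

    R = axis_angle_to_matrix(rotation)          # world-to-camera rotation
    center = -R.T @ translation                 # optical center in reconstruction coordinates
    view_dir = R.T @ np.array([0.0, 0.0, 1.0])  # camera looks along +z in its own frame
    print(center, view_dir)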

Next issue: GPS altitude.

It seems the gps_position z value is almost the same for every shot: http://i.imgur.com/9Xzwi8y.png but the actual GPS altitude from the EXIF data varies greatly: http://i.imgur.com/eWh3Yph.png

Is this intended behaviour or a bug?

LucasMM86 commented 7 years ago

How did you take the images? I believe some UAVs use barometric pressure to maintain altitude, because the GPS position has a bigger error. So maybe the actual camera positions really are almost the same for every shot, even if the EXIF GPS data varies so much.

fredlllll commented 7 years ago

If it is called gps_position it should use the GPS data, right? I don't know anything about the drone that took the photos.

dakotabenjamin commented 7 years ago

For the Bellus dataset, the images were taken with a Sony WX-220 on a SenseFly eBee drone. The images are referenced by the flight planning software using drone logs. IIRC the eBee does use barometric pressure to monitor altitude.

garlac commented 5 years ago

I am trying something similar: display cameras over the point cloud, and on clicking one, its image should be displayed over the point cloud.

I used the rotation matrices from bundle_r000.out and mapped them to images with img_list.txt.

However, although the cameras look aligned properly, the images are not displayed properly. There seems to be an axis-orientation issue.

pierotofy commented 4 years ago

ODM now generates a shots.geojson in odm_report that can be used to display the camera angles/positions in a 3D viewer (WebODM already implements that, and displays thumbnails too).
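
A minimal sketch of pulling camera positions back out of that file; shots.geojson is a standard GeoJSON FeatureCollection, but the exact property names (e.g. the image filename key) are an assumption here, so check a file from your own dataset:

    import json

    # Rough sketch: list camera positions from ODM's odm_report/shots.geojson.
    # Property names like "filename" are assumptions; inspect your own file to
    # confirm what ODM actually writes.
    with open("odm_report/shots.geojson") as f:
        shots = json.load(f)

    for feature in shots["features"]:
        coords = feature["geometry"]["coordinates"]   # point position for the camera
        props = feature["properties"]
        print(props.get("filename", "?"), coords)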

If something was missed and still needs to be implemented, please re-open? :pray:

smathermather commented 4 years ago

I think this is broadly complete. Pix4D does some cool stuff with tracing rays back to their origin etc., but I get the sense it's mostly just for oohs and ahhs.