openMVG / openMVG

open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion.
Mozilla Public License 2.0

How to splice and fuse multiple known point cloud files? #2017

Open RedOrient opened 2 years ago

RedOrient commented 2 years ago

Hello, I have exported the corresponding point cloud files from pictures taken at different locations.

Now I hope to get the extrinsics through openMVG and then use those parameters to fuse the individual point clouds into a single point cloud. I want to know whether this is feasible and how to do it.

Thank you~

pmoulon commented 2 years ago

Are the point clouds coming from the same SfM scene (same global coordinates)? If yes, just load them together in your 3D mesh/point cloud viewer (CloudCompare, MeshLab).

If they are coming from different scenes, they most likely each have their own local coordinate system; you could use GPS or ICP registration.
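
A minimal sketch of ICP registration between two exported clouds, using the Open3D library as one option (Open3D, the file names, and the distance threshold below are assumptions for illustration, not something openMVG provides):

```python
import numpy as np
import open3d as o3d

# Placeholder file names for two clouds that live in different local frames.
source = o3d.io.read_point_cloud("cloud_a.ply")
target = o3d.io.read_point_cloud("cloud_b.ply")

# Initial guess for the rigid transform; a GPS- or manually-derived estimate
# helps ICP converge when the clouds start far apart.
init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # tune to the scale of the scene
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated transform and merge the aligned clouds.
source.transform(result.transformation)
source += target
o3d.io.write_point_cloud("merged.ply", source)
```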

RedOrient commented 2 years ago

Thank you for your answer. In fact, I'm not sure whether these point clouds come from the same SfM scene. I only know that they were obtained from cameras at different locations in the same room.

Here are my point clouds and pictures. If you can tell whether these point clouds come from the same SfM scene after looking at the photos and point clouds, please let me know.

PS: You also mentioned GPS and ICP registration. Does openMVG provide these functions? Out.zip

pmoulon commented 2 years ago

OpenMVG can use GPS info, see the online doc.

RedOrient commented 2 years ago

I converted the reconstructed sfm_data.bin to sfm_data.json and then got the external parameters R and C of each frame, for example:

"key": 84,
"value": {
    "rotation": [
        [
            -0.4368171113280469,
            -0.06197143503282726,
            -0.897413144817365
        ],
        [
            0.025042138626564803,
            0.9963997632643671,
            -0.08099631509964953
        ],
        [
            0.8992017029254498,
            -0.0578537207678631,
            -0.4336925690501113
        ]
    ],
    "center": [
        -1.8270587481193035,
        -0.31906189261721398,
        -5.905284453279028
    ]
}
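
A minimal sketch of reading those parameters programmatically, assuming the exported file is named sfm_data.json and its extrinsics section follows the structure of the fragment above:

```python
import json
import numpy as np

with open("sfm_data.json") as f:
    sfm = json.load(f)

# Map each pose key to its world->camera rotation R and camera center C.
poses = {}
for entry in sfm["extrinsics"]:
    R = np.array(entry["value"]["rotation"])  # 3x3 rotation matrix
    C = np.array(entry["value"]["center"])    # camera center in world coordinates
    poses[entry["key"]] = (R, C)
```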

According to my understanding:

  1. R is the rotation matrix from the world coordinates of each frame to the camera coordinates,

  2. C is the position of each frame's camera in world coordinates,

  3. T is the translation vector from the world coordinates of each frame to the camera coordinates, and T = -R * C.

Therefore, I have the R and T that convert the world coordinates of each frame into camera coordinates. At the same time, I have the point cloud of each frame in the camera coordinate system. In theory, I can compute the point cloud of each frame in the world coordinate system.

From the formula

camera coordinate position = R * world coordinate position + T

I can compute

world coordinate position = R.T * (camera coordinate position - T)
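
A minimal sketch of that step, assuming the per-frame clouds are expressed in each camera's own coordinate frame and at the same scale as the SfM reconstruction (the function name and placeholder arrays are illustrative, not part of openMVG):

```python
import numpy as np

def camera_to_world(points_cam, R, C):
    """Map (N, 3) camera-frame points of one view back to world coordinates."""
    # With T = -R @ C, world = R.T @ (camera - T) simplifies to R.T @ camera + C.
    # points_cam @ R applies R.T to every row at once.
    return points_cam @ R + C

# Example with placeholder data; R and C would come from the sfm_data.json
# extrinsics entry of the same frame the cloud was captured from.
points_cam = np.random.rand(100, 3)
R = np.eye(3)
C = np.zeros(3)
points_world = camera_to_world(points_cam, R, C)
```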

I think there is no problem with the theory, but the point clouds I calculated do not seem to end up in the same coordinate system. Please check whether there is a problem with my steps. Thank you very much~~