gkiavash / Master-Thesis-Structure-from-Motion


Localize Point Cloud #13

Open gkiavash opened 1 year ago

gkiavash commented 1 year ago

The task is to localize the point cloud obtained from the SfM pipeline on maps. To do so, I first generated both sparse and dense reconstructions with refinements. Then, snapshots were taken from a top-down point of view. Finally, I tried to use feature matching algorithms to find the location of the point cloud and camera poses on Google Maps (or Google Earth).

gkiavash commented 1 year ago

Here are some of the snapshots from different locations and the corresponding target images on Google Earth:

1) [images: Capture2, snapshot00, Capture]

2) [images: street_3_3 query, snapshot00, snapshot01, street_3_3 target]


gkiavash commented 1 year ago

As a preprocessing step, I assumed that the edges represent streets and their angles. So, I tried edge detection with erosion and dilation on both images to make them similar, like:

[image: img_matches2 (9)]

I also tried using the pixel-perfect code to generate feature maps with CNNs. However, neither preprocessing approach succeeded.
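For reference, a minimal sketch of that edge-based preprocessing with OpenCV; the file name, Canny thresholds, and kernel size are illustrative assumptions, not the exact values used:

```python
import cv2
import numpy as np

# Hypothetical file name: any point-cloud snapshot or map crop.
img = cv2.imread("snapshot00.png", cv2.IMREAD_GRAYSCALE)

# Canny edge detection; the thresholds (50/150) are illustrative.
edges = cv2.Canny(img, 50, 150)

# Dilate then erode to thicken street edges and bridge small gaps,
# so both images end up with similar edge maps.
kernel = np.ones((5, 5), np.uint8)
edges = cv2.dilate(edges, kernel, iterations=2)
edges = cv2.erode(edges, kernel, iterations=1)

cv2.imwrite("snapshot00_edges.png", edges)
```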

gkiavash commented 1 year ago
1. I tried SIFT and ORB from the OpenCV library to match those images (a minimal matching sketch follows this list); here are some of the results:

[images: img_matches (1), img_matches (2), img_matches2 (9)]

2. Since our reconstruction contains more information about the walls, I tried to match meshes with 3D maps from angles where the walls are visible:

[image: img_matches_sift (1)]

3. Unfortunately, none of these tests worked. To be sure the code itself works, I cropped part of the map as the source image and matched it against the whole map, and that worked:

[images: img_matches_sift, img_matches (7)]
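For context, a minimal sketch of the SIFT matching pipeline with Lowe's ratio test, assuming hypothetical file names (the ORB variant only swaps the detector and uses Hamming distance):

```python
import cv2

# Hypothetical file names: a point-cloud snapshot and a Google Earth crop.
query = cv2.imread("snapshot00.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("street_3_3_target.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(query, None)
kp2, des2 = sift.detectAndCompute(target, None)

# Brute-force matching with the ratio test to keep distinctive matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

vis = cv2.drawMatches(
    query, kp1, target, kp2, good, None,
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("img_matches.png", vis)
```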

gkiavash commented 1 year ago

I also tried the basic approach you mentioned: choosing key points manually, such as street corners, and searching for them in the target image. However, it still fails to find matches, and the similarity of the descriptors is too low.

By having a closer look at both source and target images, I see that many details of the streets are occluded by buildings or their shadows. The number of cars is different, and the visible buildings have different colors because of the illumination.
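A minimal sketch of that manual-keypoint check, computing SIFT descriptors at hand-picked pixel coordinates; the coordinates, file names, and keypoint size here are hypothetical:

```python
import cv2
import numpy as np

src = cv2.imread("snapshot00.png", cv2.IMREAD_GRAYSCALE)
dst = cv2.imread("map_target.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical pixel coordinates of street corners, picked by hand.
src_pts = [(120, 340), (410, 95)]
dst_pts = [(233, 512), (601, 180)]

sift = cv2.SIFT_create()
# Compute descriptors at fixed locations instead of detecting keypoints;
# the keypoint size (16 px) is an illustrative choice.
kp_src = [cv2.KeyPoint(float(x), float(y), 16) for x, y in src_pts]
kp_dst = [cv2.KeyPoint(float(x), float(y), 16) for x, y in dst_pts]
_, des_src = sift.compute(src, kp_src)
_, des_dst = sift.compute(dst, kp_dst)

# L2 distance between corresponding descriptors: lower means more similar.
for d1, d2 in zip(des_src, des_dst):
    print(np.linalg.norm(d1 - d2))
```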

A Successful Approach

Point cloud registration works. Since we can obtain great detail of streets, angles, and walls through SfM, I tried a supervised approach: generating a near-perfect dense reconstruction of streets as the target point cloud, and registering a smaller point cloud from a different video as the source point cloud.

In detail, I ran dense reconstruction over 325 images covering 8 streets, with extreme quality settings: a higher number of PatchMatch iterations, wider windows, high-quality images, pixel-perfect refinement, etc. Then, I captured another video with my own phone on one of those streets and generated both sparse and dense source point clouds. I used RANSAC-based global registration and ICP local registration from the Open3D library to register it, and here is the result:

[image: registration result]
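For reference, a minimal sketch of that global + local registration with Open3D; the file paths are hypothetical and the voxel size is illustrative (the right values depend on the scene scale):

```python
import open3d as o3d

voxel = 0.5  # illustrative voxel size, scene-scale dependent

def preprocess(pcd):
    # Downsample, estimate normals, and compute FPFH features.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("street_source.ply")   # hypothetical paths
target = o3d.io.read_point_cloud("streets_dense.ply")
src_down, src_fpfh = preprocess(source)
tgt_down, tgt_fpfh = preprocess(target)

# Global registration: RANSAC over FPFH feature correspondences.
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
     o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement: point-to-plane ICP starting from the RANSAC transform.
refined = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, voxel * 0.4, result.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(refined.transformation)
```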

gkiavash commented 1 year ago

Registering in the City Point Cloud

I sliced the city point cloud along the z axis and removed the points belonging to building roofs, i.e. removed points with z coordinates above a threshold. The point cloud becomes:

[image: snapshot02]
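A minimal sketch of that z-slicing with Open3D and NumPy; the file name and threshold are illustrative:

```python
import numpy as np
import open3d as o3d

# Hypothetical path and threshold; z_max marks the roof-height cut-off.
pcd = o3d.io.read_point_cloud("city.ply")
z_max = 3.0

# Keep only points below the threshold (ground and streets).
pts = np.asarray(pcd.points)
ground = pcd.select_by_index(np.where(pts[:, 2] < z_max)[0])
o3d.io.write_point_cloud("city_sliced.ply", ground)
```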

I also applied the same filtering to our dense reconstruction, so that only the points belonging to the ground and streets remain, matching the new city point cloud. In addition, I downsampled the high-density regions to obtain a uniform distribution like the city point cloud:

[image: snapshot03]
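A minimal sketch of that density filtering via Open3D voxel downsampling; the path and voxel size are illustrative:

```python
import open3d as o3d

# Hypothetical path; the voxel size is illustrative and scale-dependent.
pcd = o3d.io.read_point_cloud("streets_dense_sliced.ply")

# Voxel downsampling keeps one point per cell, flattening dense regions
# into a roughly uniform distribution like the city point cloud.
uniform = pcd.voxel_down_sample(voxel_size=0.2)
o3d.io.write_point_cloud("streets_uniform.ply", uniform)
```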

Now, I believe it is easier to register our reconstruction in the new filtered point cloud. We need to find a good 3D feature extractor and fine-tune the parameters.

[images: after, after2]