cdcseacave / openMVS

open Multi-View Stereo reconstruction library
http://cdcseacave.github.io
GNU Affero General Public License v3.0

DensifyPointCloud - Noisy point clouds #117

Closed kimoktm closed 8 years ago

kimoktm commented 8 years ago

Noisy point clouds when running DensifyPointCloud on a scene captured with a fisheye lens. Tested on multiple scenes and all of them end up with the same noisy pattern.


(screenshots: openmvs_problem201, openmvs_problem00)

cdcseacave commented 8 years ago

Did the sparse reconstruction complete fine? Can you share it as well?

kimoktm commented 8 years ago

@cdcseacave thanks a lot for the software and the quick reply. Here is a screenshot of the sparse cloud (sparse_cloud02).

And here are the point clouds https://drive.google.com/open?id=0B1HwiB5wWmuUM3NEOTJ3dkg0Q2c

pmoulon commented 8 years ago

Did you try the mesh reconstruction step, to see if the noise can be removed?

kimoktm commented 8 years ago

@pmoulon I tried running ReconstructMesh but I got a segmentation fault:

09:19:42 [App ] Command line: scene_dense.mvs
09:19:43 [App ] Scene loaded (1s287ms): 61 images (61 calibrated) with a total of 18.14 MPixels (0.30 MPixels/image) 1404349 points, 0 vertices, 0 faces
Segmentation fault (core dumped)

I've uploaded mvs files (scene and scene_dense) as well for analysis https://drive.google.com/open?id=0B1HwiB5wWmuUM3NEOTJ3dkg0Q2c

What I'm currently testing is running the openMVG sequential pipeline on the undistorted_images to check whether the problem is related to the fisheye model or not.

rjanvier commented 8 years ago
pmoulon commented 8 years ago

@ORNis The dataset was shot by @pyp22 => 73 shots, Canon 7D MKII with a Samyang 8mm f/3.5 UMC Fish-eye CS II lens.

rjanvier commented 8 years ago

OK, thanks. I will try it on my side if @kimoktm's current experiment is not conclusive.

rjanvier commented 8 years ago

It just ran fine here:

kimoktm commented 8 years ago

@ORNis thanks for the test. I ran openMVG_main_ComputeFeatures -m AKAZE_FLOAT -p HIGH with AKAZE_FLOAT features. The scene RMSE: 0.536442.

kimoktm commented 8 years ago

@ORNis I'm sorry to bother you, but can you share your sfm_data file so I can test whether the problem is in MVS or SfM?

rjanvier commented 8 years ago

No problem: https://we.tl/iPNwkt4FZe. I used the SIFT detector/descriptor in HIGH mode + the ANNL2 matcher. RMSE was approx. 0.2. Anyway, the rotation steps between the two images taken at each corner of the building are too big, so the images are sometimes poorly connected.

cdcseacave commented 8 years ago

Indeed, I am able to reproduce your problem, @kimoktm. However, I am not sure what is causing it (actually I don't have even the smallest idea). This is how the mesh looks. The noisy points in front were removed successfully, but the points behind the wall were not, creating false cavities in the mesh.

Now I am not sure if the problem is just noise (false dense points generated due to no image overlap, etc.) or really a bug in the algorithm that creates false points. In any case, the noisy points behind the wall are not removed during mesh reconstruction because there are no points in front to mask them.

kimoktm commented 8 years ago

@cdcseacave that's interesting, I hope it's an easy fix. Here are some tests I did; unfortunately, all of them resulted in the same problem:

  1. Running DensifyPointCloud with different resolutions
  2. Using the undistorted images as initial step for SFM
  3. I've also used @ORNis's SfM:

(screenshot: orin_scene00)

@ORNis what version & options of openMVS are you using?

kafeiyin00 commented 8 years ago

Hi everyone, I have met the same problem before. Maybe the baseline between two frames is too small in the DensifyPointCloud step. I solved the same problem by limiting the distance between two frames: if the distance is too small, I discard the depth-map of that frame. But sometimes it still happens. (image)

You can see that the noisy points converge toward the center of a frame.
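For reference, this is roughly what the workaround above amounts to; a minimal sketch with hypothetical names, not code from openMVS or from kafeiyin00's tool:

```cpp
// Skip a neighbor view whose baseline to the reference view is too short.
// Hypothetical types; the threshold is in scene units and must be tuned per dataset.
#include <cmath>

struct Pose { double cx, cy, cz; };  // camera center

bool HasSufficientBaseline(const Pose& ref, const Pose& neighbor,
                           double minBaseline)
{
    const double dx = ref.cx - neighbor.cx;
    const double dy = ref.cy - neighbor.cy;
    const double dz = ref.cz - neighbor.cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz) >= minBaseline;
}
```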

cdcseacave commented 8 years ago

Thanks, these kinds of experiments help. How did you limit the selected depth-maps? The best way is to do it before depth-map computation.

This is how it works:

- from the SfM solution, all visible neighbor images are selected for each image and scored; the score is a combination of baseline distance and resolution
- in the densification module, the best n neighbor views are selected for each image in order to compute its depth-map

You can play with how the score is computed for the neighbor images in SelectNeighborViews() in Scene.cpp. You can also play with how big n is. Also, you can play with how many views need to agree in order to accept a point at fusion time. The pipeline was never tested before on wide-lens scenes, so some of these values may not be optimal.
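To make those knobs concrete, here is a minimal, purely illustrative sketch of a neighbor-view score combining triangulation angle and relative resolution. It is not the actual SelectNeighborViews() implementation and all names are hypothetical; it only shows the kind of trade-off such a score encodes:

```cpp
#include <algorithm>
#include <cmath>

struct NeighborInfo {
    double meanTriangulationAngleDeg; // mean angle (degrees) of points seen by both views
    double relativeResolution;        // neighbor image resolution / reference resolution
    unsigned sharedPoints;            // number of sparse points seen by both views
};

// Favor a "sweet spot" triangulation angle: too small gives unstable depth,
// too large hurts photo-consistency; also prefer views of similar resolution.
double ScoreNeighbor(const NeighborInfo& n,
                     double optimalAngleDeg = 10.0,
                     double sigmaDeg = 5.0)
{
    const double d = (n.meanTriangulationAngleDeg - optimalAngleDeg) / sigmaDeg;
    const double angleScore = std::exp(-0.5 * d * d);  // Gaussian around the optimum
    const double resScore   = std::min(n.relativeResolution,
                                       1.0 / n.relativeResolution); // penalize scale mismatch
    return n.sharedPoints * angleScore * resScore;
}
```

With wide-angle or near-zero-baseline pairs, lowering the weight of very small angles (or raising the minimum number of agreeing views at fusion time) is the kind of adjustment being suggested here.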

kafeiyin00 commented 8 years ago

@cdcseacave thanks for your reply and your code. My test data come from a panorama (Ladybug5). Maybe the camera model is the core of the problem.

cdcseacave commented 8 years ago

True, my guess as well. Could you share some datasets with this camera for testing?

pmoulon commented 8 years ago

@kafeiyin00 Which tool did you use for SfM? Regarding the point cloud: in your images we can see that the noisy points lie on straight lines starting from the same camera poses. Perhaps it comes from camera pairs that share the same pose being chosen for depth computation (because their baseline is very small or 0). BTW, if you reconstruct a mesh, they should disappear.

kafeiyin00 commented 8 years ago

@pmoulon I use openMVG for SfM, thanks a lot :-) I separate (or reproject) the panorama into several frames with the same center, and I set a limitation so that frames coming from the same panorama will not create a pair.

pmoulon commented 8 years ago

@kafeiyin00 Thanks. Since the distortion was removed when the panorama was stitched, I think you have used camera model 1 (classic pinhole with no distortion). BTW, if you have used camera model 3 (pinhole radial 3), the distortion coefficients must be very small.

kafeiyin00 commented 8 years ago

@pmoulon thank you for your advice. I used the fisheye camera model because the Ladybug5 has 6 fisheye cameras. I will test removing the distortion first and using camera model 1.

kafeiyin00 commented 8 years ago

@cdcseacave here is some test data: ladybug_panoramic_000003, ladybug_panoramic_000000, ladybug_panoramic_000001, ladybug_panoramic_000002

cdcseacave commented 8 years ago

@kafeiyin00 could you please also detail the process you currently use to arrive at the noisy point-cloud: applications used, command lines, how the panoramas are split, etc.?

kafeiyin00 commented 8 years ago

@cdcseacave I will upload my code to GitHub soon :-)

pmoulon commented 8 years ago

Perhaps it would be better to move @kafeiyin00's case to a new issue, in order not to disturb the initial question from @kimoktm.

kafeiyin00 commented 8 years ago

@cdcseacave @pmoulon BTW, I am a master's student at Wuhan University and I am interested in SfM and MVS. Could you please recommend some basic papers or books about SfM or MVS?

pmoulon commented 8 years ago

@kafeiyin00 I would recommend you take a look at this list: https://github.com/openMVG/awesome_3DReconstruction_list

And also:
=> http://szeliski.org/Book/
=> http://www.cse.wustl.edu/~furukawa/papers/fnt_mvs.pdf

kafeiyin00 commented 8 years ago

@pmoulon thx a lot

rjanvier commented 8 years ago

@kimoktm, I'm using the latest openMVS sources and I ran the dense depth-map computation at resolution level 3.

pmoulon commented 8 years ago

@kafeiyin00 I think we should start a special thread for your issue.

BTW, here are some tests splitting your spherical images into pinhole images.

Here is your image + the 5 simulated pinhole camera view frustum footprints (test_toto).

Then I made a resampling in order to create 5 pinhole images from the spherical image. I can release the code if you want (you can ask for as many images as you want around the X direction). (images: a4b31cd8-71f4-11e6-81bb-90f7e15a7ed3_0, _1, _2, _3, _4)
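For readers who want to try this kind of splitting themselves, here is a rough, self-contained sketch of resampling an equirectangular panorama into one virtual pinhole view. This is not pmoulon's tool; it just uses OpenCV's remap, and the yaw, field of view, and output size are arbitrary choices:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Extract a virtual pinhole image looking along `yaw` (radians) with the given
// horizontal field of view, by sampling the equirectangular panorama `pano`.
cv::Mat ExtractPinhole(const cv::Mat& pano, double yaw, double hfov,
                       int outW = 1200, int outH = 1200)
{
    const double focal = 0.5 * outW / std::tan(0.5 * hfov); // pinhole focal length (pixels)
    cv::Mat mapX(outH, outW, CV_32F), mapY(outH, outW, CV_32F);
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            // Ray in the virtual camera frame (z forward), rotated by yaw around the vertical axis.
            const double cx = x - 0.5 * outW, cy = y - 0.5 * outH;
            const double rx =  cx * std::cos(yaw) + focal * std::sin(yaw);
            const double ry =  cy;
            const double rz = -cx * std::sin(yaw) + focal * std::cos(yaw);
            // Ray -> spherical coordinates -> equirectangular pixel.
            const double lon = std::atan2(rx, rz);                         // [-pi, pi]
            const double lat = std::atan2(ry, std::sqrt(rx * rx + rz * rz)); // [-pi/2, pi/2]
            mapX.at<float>(y, x) = float((lon + CV_PI) / (2.0 * CV_PI) * pano.cols);
            mapY.at<float>(y, x) = float((lat + CV_PI / 2.0) / CV_PI * pano.rows);
        }
    }
    cv::Mat out;
    cv::remap(pano, out, mapX, mapY, cv::INTER_LINEAR);
    return out;
}
```

Calling this several times with evenly spaced yaw values (e.g. every 72 degrees for 5 views) gives a ring of pinhole images around the vertical axis, similar to the footprints shown above.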

BTW:

cdcseacave commented 8 years ago

Great work, @pmoulon! Did you test the dense reconstruction too with this method of extracted images to see if this corrects the noisy point-cloud?

caomw commented 8 years ago

@pmoulon What is BTW ?

Just5D commented 8 years ago

by the way

kafeiyin00 commented 8 years ago

@pmoulon I did not test resampling using the fisheye model; I used the classic pinhole with no distortion. Can you release your resampling code? I will test the dense reconstruction using the fisheye model.

(images: ladybug_panoramic_000000_3, ladybug_panoramic_000000_6, ladybug_panoramic_000000_0)

caomw commented 8 years ago

@Just5D Thanks!

pmoulon commented 8 years ago

@kafeiyin00 There is a misunderstanding: my tool resamples the panorama into X pinhole images (see https://github.com/openMVG/openMVG/issues/621 for more info). I was proposing to use the raw Ladybug fisheye images (I don't know if you have them or not).

cdcseacave commented 8 years ago

I also think the best results could be obtained by using the raw fisheye images, not pre-processing them in any way. The SfM pipeline can exploit all the information contained in the raw images. Please share them if you can.

kimoktm commented 8 years ago

After some tests I was able to avoid this noise by changing the baseline as suggested; however, the values are heavily tied to the scene (poses, intrinsics) and often result in losing good segments of the scene. So @cdcseacave, can you point to a good place to start looking for the problem?

A side question: I'm trying to merge point clouds from different sources (MVS and laser scans). Can I run mesh reconstruction on the combined clouds? As far as I can see, the algorithm requires the views at each point, which aren't available for the laser scans.

cdcseacave commented 8 years ago

You can start by playing with the way the weights are computed for the neighbor views in SelectNeighborViews().

The laser scans can indeed be integrated with the dense point-cloud in several ways. If you have the raw LIDAR point-clouds (one point-cloud per scan), you can align them and use the position of the LIDAR as the position of the camera that sees the points. You add one view for each point, where the view is a new camera with only the position provided (the rotation is not needed for mesh reconstruction). So for each LIDAR point-cloud, you append to the dense reconstruction the points with one view per point and the associated camera with known position.
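A conceptual sketch of that bookkeeping, using purely hypothetical structures (not the real openMVS scene interface):

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Camera {
    Vec3 center;                 // only the position matters for mesh reconstruction
};

struct DensePoint {
    Vec3 position;
    std::vector<uint32_t> views; // indices of the cameras that "see" this point
};

struct Scene {
    std::vector<Camera> cameras;
    std::vector<DensePoint> points;
};

// Append an aligned LIDAR scan: register the scanner position as a new "camera"
// and give every scan point that camera as its single view.
void AppendLidarScan(Scene& scene,
                     const Vec3& scannerPosition,
                     const std::vector<Vec3>& scanPoints)
{
    const uint32_t camIdx = static_cast<uint32_t>(scene.cameras.size());
    scene.cameras.push_back(Camera{scannerPosition});
    for (const Vec3& p : scanPoints)
        scene.points.push_back(DensePoint{p, {camIdx}});
}
```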

Another solution is to use a different mesh reconstruction algorithm that does not use visibility constraints, like Poisson reconstruction. However, this approach is not robust to outliers (which is fine for the LIDAR scans, but could be a problem for the dense reconstruction if the input images are not well taken and there is not enough overlap for good outlier removal during dense matching).

Or, finally, you can try 3Dnovator, which has the two options integrated.

kimoktm commented 8 years ago

@cdcseacave After some digging I think the problem is related to depth-map fusion and not to the reconstruction itself. Merging the filtered depth-maps in an external program shows no such noise. I also disabled the influence of other views on a point's position (confidence normalization) and got no noise; however, the result looked unfiltered (as if the filtered depth-maps were not being used).

(snapshot00: merging of the filtered depth-maps)

cdcseacave commented 8 years ago

Could you please share with us which external fusion tool you tried and with what parameters?

kimoktm commented 8 years ago

@cdcseacave sorry, I wasn't clear. What I meant is overlaying and combining the point-clouds together in MeshLab or CloudCompare, not depth-map fusion.

cdcseacave commented 8 years ago

I'm sorry, I still do not understand: which point-clouds? Could you please detail what you did from the beginning to the end?

Just5D commented 8 years ago

just combine all depthmaps, no fusion

kimoktm commented 8 years ago

Yeah, sure. I run DensifyPointCloud with verbose level 3, which exports a filtered depth-map and its corresponding ply for each view. I combine these ply files together in CloudCompare, so no filtering or confidence measurement, just merging them together. I hope that was helpful.

Here are some tweaks I tried in the FuseDepthmaps function:

  1. Removing the influence of other views from the confidence normalization, so each point's position depends only on its depth value in that view. The results didn't show the noise pattern; however, there were some false positives similar to the unfiltered depth-maps, which took me to the next test.
  2. Disabling the filtering (minimum number of views to accept a point) and the confidence influence. The results were indeed very similar to the unfiltered depth-maps. So are the filtered depth-maps actually used in the fusion process? I couldn't find where they are saved.
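For context, confidence-based fusion of this kind typically looks like the generic sketch below (hypothetical names, not the actual FuseDepthmaps code): a point is kept only if enough views agree, and its final position is the confidence-weighted average of the agreeing estimates. The two tweaks above correspond to dropping the weighting and dropping the minimum-view filter.

```cpp
#include <cstddef>
#include <vector>

struct Estimate {
    double x, y, z;    // 3D position back-projected from one depth-map
    double confidence; // per-view confidence of that depth estimate
};

// Returns true and writes the fused position if at least minViews estimates agree.
bool FusePoint(const std::vector<Estimate>& agreeingViews,
               std::size_t minViews, double fused[3])
{
    if (agreeingViews.size() < minViews)
        return false; // filtered out: not enough views confirm this point

    double wSum = 0, x = 0, y = 0, z = 0;
    for (const Estimate& e : agreeingViews) {
        x += e.confidence * e.x;
        y += e.confidence * e.y;
        z += e.confidence * e.z;
        wSum += e.confidence;
    }
    // Confidence normalization: the fused position is pulled toward the
    // higher-confidence views; skipping this step keeps only the reference
    // view's estimate (the first experiment above).
    fused[0] = x / wSum;
    fused[1] = y / wSum;
    fused[2] = z / wSum;
    return true;
}
```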

cdcseacave commented 8 years ago

Thanks, this is very helpful. There must be a bug then in FuseDepthmaps. I'll try it on my side.

The filtered depth-maps are stored over the unfiltered depth-maps, this is why you do not find them :)

cdcseacave commented 8 years ago

I am able to reproduce your results, so now the hard part starts: finding the problem :)

cdcseacave commented 8 years ago

solved, pls try the latest commit on master

kimoktm commented 8 years ago

@cdcseacave works great. Thanks a lot :+1: