kimoktm closed this issue 8 years ago.
Did the sparse reconstruction complete fine? Can you share it as well?
@cdcseacave thanks a lot for the software and the quick reply. Here is a screenshot of the sparse clouds.
And here are the point clouds https://drive.google.com/open?id=0B1HwiB5wWmuUM3NEOTJ3dkg0Q2c
Did you try the ReconstructMesh step, to see if the noise can be removed?
@pmoulon I tried running ReconstructMesh but I got a segmentation fault:
09:19:42 [App ] Command line: scene_dense.mvs
09:19:43 [App ] Scene loaded (1s287ms):
61 images (61 calibrated) with a total of 18.14 MPixels (0.30 MPixels/image)
1404349 points, 0 vertices, 0 faces
Segmentation fault (core dumped)
I've uploaded mvs files (scene and scene_dense) as well for analysis https://drive.google.com/open?id=0B1HwiB5wWmuUM3NEOTJ3dkg0Q2c
What I'm currently testing is running the openMVG sequential pipeline on the undistorted images, to check whether the problem is related to the fisheye model or not.
@ORNis The dataset was shot by @pyp22: 73 shots, Canon 7D MkII with a Samyang 8mm f/3.5 UMC Fish-eye CS II lens.
Ok, thanks. I will try it on my side if @kimoktm's current experimentation is not conclusive.
It just runs fine here:
@ORNis thanks for the test. I ran openMVG_main_ComputeFeatures -m AKAZE_FLOAT -p HIGH (i.e. AKAZE_FLOAT features at the HIGH preset). The scene RMSE: 0.536442
@ORNis I'm sorry to bother you, but can you share your sfm_data file so I can test whether the problem is in MVS or SfM?
No problem: https://we.tl/iPNwkt4FZe I used the SIFT detector/descriptor in HIGH mode + the ANNL2 matcher. The RMSE was approx. 0.2. Anyway, the rotation steps between two images at each corner of the building are too big, so images are sometimes poorly connected.
Indeed, I am able to reproduce your problem, @kimoktm. However, I am not sure what is causing it (actually I don't have even the smallest idea). This is what the mesh looks like. The noisy points in front were removed successfully, but the points behind the wall were not, creating false cavities in the mesh.
Now I am not sure if the problem is just noise (false dense points generated due to no image overlap, etc.) or really a bug in the algorithm that creates false points. However, the noisy points behind the wall are not removed during mesh reconstruction because there are no points in front to mask them.
@cdcseacave that's interesting; I hope it's an easy fix. Here are some tests I did; unfortunately, all of them resulted in the same problem:
@ORNis what version & options of openMVS are you using?
Hi everyone, I have met the same problem before. Maybe the baseline between two frames is too small in the DensifyPointCloud step. I solved it by limiting the distance between two frames: if the distance is too small, I discard the depth-map of that frame. But sometimes it still happens.
You can see that the noisy points point toward the center of a frame.
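The workaround described above (discarding a frame's depth-map when its baseline to the previous kept frame is too small) could be sketched as follows; the `min_baseline` threshold and the simple Euclidean distance check are illustrative assumptions, not openMVS code:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two camera centers.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_depthmaps(camera_centers, min_baseline=0.05):
    """Keep a frame's depth-map only if its camera is far enough from the
    previously kept frame. Illustrative workaround, not openMVS code."""
    kept = []
    last_center = None
    for idx, center in enumerate(camera_centers):
        if last_center is None or euclidean(center, last_center) >= min_baseline:
            kept.append(idx)
            last_center = center
    return kept

# Frames 1 and 2 share (almost) the same center as frame 0, so they are dropped.
centers = [(0, 0, 0), (0.001, 0, 0), (0.002, 0, 0), (1, 0, 0)]
print(select_depthmaps(centers))  # [0, 3]
```

This keeps the first frame of each near-coincident cluster, which matches the intent of rejecting near-zero-baseline pairs discussed later in this thread.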
Thx, these kinds of experiments help. How did you limit the selected depth-maps? The best place to do this is before depth-map computation.
This is how this works:
- from the SfM solution, all visible neighbor images are selected for each image and scored; the score is a combination of baseline distance and resolution.
- in the densification module, the best n neighbor views are selected for each image in order to compute the depth-map.
You can play with how the score is computed for the neighbor images in SelectNeighborViews() in Scene.cpp. You can also play with how big n is. Also, you can play with how many views need to agree in order to accept a point at fusion time. The pipeline was never tested before on wide-lens scenes, so some of these values may not be optimal.
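A toy version of such a neighbor score, combining baseline and resolution as described above, might look like this. The weighting scheme, the `ideal_baseline` value, and the Gaussian penalty are illustrative assumptions; the actual formula in SelectNeighborViews() differs:

```python
import math

def neighbor_score(ref_center, nb_center, nb_resolution,
                   ideal_baseline=0.3, w_baseline=0.7, w_resolution=0.3):
    """Toy score for a candidate neighbor view: prefer baselines close to an
    'ideal' value and higher-resolution neighbors. The weights and the ideal
    baseline are illustrative, not openMVS's actual values."""
    baseline = math.dist(ref_center, nb_center)
    # Penalize baselines much smaller or larger than the ideal one.
    baseline_term = math.exp(-((baseline - ideal_baseline) / ideal_baseline) ** 2)
    return w_baseline * baseline_term + w_resolution * nb_resolution

ref = (0.0, 0.0, 0.0)
# A near-zero baseline scores much lower than one close to the ideal value,
# so near-coincident views (e.g. sub-images of one panorama) rank last.
near = neighbor_score(ref, (0.001, 0.0, 0.0), nb_resolution=1.0)
good = neighbor_score(ref, (0.3, 0.0, 0.0), nb_resolution=1.0)
print(near < good)  # True
```

Sorting candidate views by such a score and taking the top n is the shape of the selection step described above.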
@cdcseacave thx for your reply and your code. My test data come from a panorama (Ladybug5). Maybe the camera model is the core of the problem.
True, that is my guess as well. Could you share some datasets from this camera for testing?
@kafeiyin00 Which tool did you use for SfM? Regarding the point cloud: in your images we can see that the noisy points lie on straight lines starting from the camera poses. Perhaps camera pairs that come from the same pose are being chosen for creating depth, although such pairs must not be used (because their baseline is very small or 0). BTW, if you reconstruct a mesh, they should disappear.
@pmoulon I use openMVG for SfM, thx a lot :-) I separate (reproject) the panorama into several frames with the same center, and I set a limitation so that frames coming from the same panorama will not create a pair.
@kafeiyin00 Thx. Since the distortion was removed when the panorama was stitched, I think you have used camera model 1 (classic pinhole with no distortion). BTW, if you have used camera model 3 (pinhole radial 3), the distortion coefficients must be very small.
@pmoulon thank you for your advice. I used the fisheye camera model because the Ladybug5 has 6 fisheye cameras. I will test removing the distortion first and then use camera model 1.
@cdcseacave here is some test data
@kafeiyin00 could you please also detail the process you currently use to arrive at the noisy point-cloud: applications used, command lines, how the panoramas are split, etc.?
@cdcseacave i will update my code to github soon :-)
Perhaps it would be better to move @kafeiyin00's case to a new issue, in order not to disturb @kimoktm's initial question.
@cdcseacave @pmoulon BTW, I am a master's student at Wuhan University and I am interested in SfM and MVS. Could you please recommend some basic papers or books about SfM or MVS?
@kafeiyin00 I would recommend you take a look at this list: https://github.com/openMVG/awesome_3DReconstruction_list
And also:
- http://szeliski.org/Book/
- http://www.cse.wustl.edu/~furukawa/papers/fnt_mvs.pdf
@pmoulon thx a lot
@kimoktm, I'm using the latest openMVS sources, and I ran the dense depth-map computation at resolution level 3.
@kafeiyin00 I think we should start a special thread for your issue.
BTW, here are some tests splitting your spherical images into pinhole images.
Here is your image plus the view-frustum footprints of the 5 simulated pinhole cameras.
Then I resampled the spherical image in order to create 5 pinhole images from it. I can release the code if you want (you can ask for as many images as you want around the X direction).
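Resampling an equirectangular panorama into a pinhole view essentially maps each pinhole pixel to a ray, the ray to spherical angles, and the angles to panorama coordinates. A minimal sketch of that coordinate mapping (rotation around the vertical axis only, no interpolation; all names and conventions are illustrative, not @pmoulon's actual tool):

```python
import math

def pinhole_to_equirect(u, v, width, height, pano_w, pano_h,
                        fov_deg=90.0, yaw_deg=0.0):
    """Map pixel (u, v) of a virtual pinhole camera to (x, y) pixel
    coordinates in an equirectangular panorama. Illustrative sketch only."""
    # Focal length from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Ray through the pixel in camera coordinates (z forward).
    x = u - width / 2.0
    y = v - height / 2.0
    z = f
    # Rotate the ray around the vertical axis by the requested yaw.
    yaw = math.radians(yaw_deg)
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    # Ray direction -> longitude/latitude -> panorama pixel.
    lon = math.atan2(xr, zr)                  # [-pi, pi]
    lat = math.atan2(y, math.hypot(xr, zr))   # [-pi/2, pi/2]
    px = (lon / (2.0 * math.pi) + 0.5) * pano_w
    py = (lat / math.pi + 0.5) * pano_h
    return px, py

# The pinhole image center maps to the panorama point at the given yaw.
px, py = pinhole_to_equirect(u=400, v=300, width=800, height=600,
                             pano_w=4096, pano_h=2048, yaw_deg=0.0)
print(round(px), round(py))  # 2048 1024
```

Generating N views "around the X direction" then amounts to repeating this with yaw_deg stepped by 360/N, sampling the panorama at each mapped coordinate.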
Great work, @pmoulon! Did you also test the dense reconstruction with the images extracted by this method, to see if it corrects the noisy point-cloud?
@pmoulon What is BTW?
by the way
@pmoulon I did not test resampling using the fisheye model; I used the classic pinhole with no distortion. Can you release your resampling code? I will then test the dense reconstruction using the fisheye model.
@Just5D Thanks!
@kafeiyin00 There is a misunderstanding: my tool resamples the panorama to X pinhole images (see https://github.com/openMVG/openMVG/issues/621 for more info). I was proposing to use the raw Ladybug fisheye images (I don't know if you have them or not).
I also think the best results could be obtained by using the raw fisheye images, without pre-processing them in any way. The SfM pipeline can exploit all the information contained in the raw images. Please share them if you can.
After some tests, I was able to avoid these noisy points by changing the baseline as suggested; however, the values are heavily scene-dependent (poses, intrinsics) and often result in losing good segments of the scene. So @cdcseacave, can you point to a good place to start looking for the problem?
A side question: I'm trying to merge point clouds from different sources (MVS and laser scans). Can I run ReconstructMesh on the combined clouds, given that the algorithm requires the views at each point, which aren't available for the laser scans?
You can start by playing with the way the weights are computed for the neighbor views in SelectNeighborViews().
The laser scans can indeed be integrated with the dense point-cloud in several ways. If you have the raw LIDAR point-clouds (one point-cloud per scan), you can align them and use the position of the LIDAR as the position of the camera that sees the points. You add one view for each point, where the view is a new camera with only its position provided (the rotation is not needed for mesh reconstruction). So for each LIDAR point-cloud, you append to the dense reconstruction the points with one view per point, plus the associated camera with known position.
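In data-structure terms, appending a scan in the way just described might look like the sketch below. The `Camera`/`Point` classes and `append_lidar_scan` are hypothetical illustrations of the idea, not the actual MVS Interface types:

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    position: tuple          # only the position is needed for mesh reconstruction
    rotation: tuple = None   # rotation can stay unknown

@dataclass
class Point:
    xyz: tuple
    views: list = field(default_factory=list)  # indices of cameras seeing it

def append_lidar_scan(cameras, points, scan_points, scanner_position):
    """Append one aligned LIDAR scan: add a position-only camera for the
    scanner and give every scan point that camera as its single view.
    Illustrative sketch of the idea, not the MVS Interface API."""
    cam_idx = len(cameras)
    cameras.append(Camera(position=scanner_position))
    for xyz in scan_points:
        points.append(Point(xyz=xyz, views=[cam_idx]))
    return cam_idx

cameras, points = [], []
idx = append_lidar_scan(cameras, points,
                        scan_points=[(1, 2, 3), (4, 5, 6)],
                        scanner_position=(0, 0, 0))
print(idx, len(points), points[0].views)  # 0 2 [0]
```

Repeating this per scan gives the mesh-reconstruction step the visibility information (point-to-camera rays) it needs, with one synthetic camera per scanner position.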
Another solution is to use a different mesh-reconstruction algorithm that does not use visibility constraints, like Poisson reconstruction. However, this approach is not robust to outliers (which is fine for the LIDAR scans, but could be a problem for the dense point-cloud if the input images are not well taken and there is not enough overlap for good outlier removal during dense matching).
Or, finally, you can try 3Dnovator, which has the two options integrated.
@cdcseacave After some digging, I think the problem is related to depth-map fusion and not to the reconstruction itself. Merging the filtered depth-maps in an external program shows no such noise. I also disabled the influence of other views on a point's position (confidence normalization) and got no noise; however, the result looked unfiltered (as if the filtered depth-maps were not used).
Merging of filtered depthmaps
Could you please share with us what external fusion tool you tried, and with what parameters?
@cdcseacave sorry, I wasn't clear. What I meant is overlaying and combining the point clouds together in MeshLab or CloudCompare, not depth-map fusion.
I'm sorry, I still do not understand, what point-clouds? Could you please detail what you did from the beginning till the end?
just combining all the depth-maps, no fusion
Yeah, sure. I run DensifyPointCloud with verbose level 3, which exports each filtered depth-map and its corresponding PLY. I combine these PLY files together in CloudCompare, so there is no filtering or confidence measurement, just merging them together. I hope that was helpful.
Here are some tweaks I tried in the FuseDepthmaps function:
1. Removing the influence of other views from the confidence normalization, so each point's position depends only on its depth value in that view. The results didn't show the noise pattern, but there were some false positives similar to the unfiltered depth-maps, which led me to the next test.
2. Disabling the filtering (minimum number of views needed to accept a point) and the confidence influence; the results were indeed very similar to the unfiltered depth-maps.
So are the filtered depth-maps used in the fusion process? I couldn't find where they are saved.
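The min-views filter mentioned in point 2 amounts to keeping a fused point only when enough depth-maps agree on it. A toy version of that idea (the quantized-position keying and the names are illustrative assumptions, not openMVS's fusion code):

```python
def fuse_points(candidates, min_views=2):
    """Keep a fused point only if at least min_views depth-maps agree on it.
    'candidates' maps a quantized 3D position to the list of view indices
    that produced a depth there. Toy sketch, not openMVS's actual fusion."""
    fused = {}
    for position, views in candidates.items():
        if len(set(views)) >= min_views:
            fused[position] = sorted(set(views))
    return fused

candidates = {
    (1.0, 2.0, 3.0): [0, 1, 2],  # seen by three depth-maps -> kept
    (9.0, 9.0, 9.0): [4],        # seen by a single depth-map -> dropped
}
print(fuse_points(candidates))  # {(1.0, 2.0, 3.0): [0, 1, 2]}
```

Disabling this check (min_views=1) keeps every candidate, which is consistent with the observation above that the output then resembles the unfiltered depth-maps.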
Thanks, this is very helpful. There must be a bug then in FuseDepthmaps. I'll try it on my side.
The filtered depth-maps are stored over the unfiltered depth-maps, this is why you do not find them :)
I am able to reproduce your results, so now the hard part starts: finding the problem :)
Solved; please try the latest commit on master.
@cdcseacave works great. Thanks a lot :+1:
Noisy point clouds when running DensifyPointCloud on a scene captured with a fisheye lens. Tested on multiple scenes, and all end up with the same noisy pattern.
Steps to reproduce the problem
openMVG_main_openMVG2openMVS -i sfm_data.bin -o scene.mvs
DensifyPointCloud scene.mvs --resolution-level 4