cdcseacave / openMVS

open Multi-View Stereo reconstruction library
http://cdcseacave.github.io
GNU Affero General Public License v3.0

Mesh generation from a point cloud external to OpenMVS processing #834

Livan89 opened 2 years ago

Livan89 commented 2 years ago

Hello @cdcseacave .

I have been trying to reconstruct a mesh from a dense cloud generated outside the OpenMVS pipeline, and I have not obtained a good result.

Let me explain: I have a cloud generated from a scanner (LiDAR). I processed the point cloud with DensifyPointCloud to export it to a .mvs file, then loaded that file in ReconstructMesh and tried to generate the mesh (I understood this would probably not work since it did not include the images and poses), and the result was wrong.

I tried the process again, this time adding the images and their poses, and attaching to each 3D point of the dense cloud the list of images (with their positions in 3D space) that see that point; the result is still wrong.
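The per-point visibility described above can be sketched as a simple pinhole-projection test: a camera "sees" a point if the point has positive depth in the camera frame and projects inside the image. This is only a minimal illustration in Python/NumPy, assuming the convention x_cam = R (X - C); the function and field names are hypothetical, not OpenMVS API:

```python
import numpy as np

def visible_views(point, cameras, width, height):
    """Return indices of cameras whose frustum contains the 3D point.

    Each camera is a dict with intrinsics K (3x3), rotation R (3x3,
    world->camera) and camera center C (3,), so that x_cam = R @ (X - C).
    """
    views = []
    for i, cam in enumerate(cameras):
        x_cam = cam["R"] @ (point - cam["C"])
        if x_cam[2] <= 0:            # point is behind the camera
            continue
        u, v, w = cam["K"] @ x_cam   # project to homogeneous pixel coords
        u, v = u / w, v / w          # perspective division
        if 0 <= u < width and 0 <= v < height:
            views.append(i)
    return views

# toy example: one camera at the origin looking down +Z
cam = {"K": np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]]),
       "R": np.eye(3),
       "C": np.zeros(3)}
print(visible_views(np.array([0.0, 0.0, 2.0]), [cam], 640, 480))  # → [0]
```

If the view lists attached to the points are built with a convention that disagrees with the camera poses stored in the scene, ReconstructMesh receives inconsistent visibility information, which could explain a wrong result.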

Can you suggest any solution or recommendation to help with this process?

Thank you very much.

cdcseacave commented 2 years ago

pls send me the MVS file (containing the cameras and the point cloud) that you try to use with ReconstructMesh

Livan89 commented 2 years ago

@cdcseacave Thank you for your quick response, could you tell me how I could send it to you?

Livan89 commented 2 years ago

@cdcseacave I have shared it at the following link; please leave a comment once you have downloaded it so I can remove the link.

Thank you; I look forward to your comment.

https://drive.google.com/file/d/1iYRo3TJ03gwyl26NAmVoRbtmxvvUNc3c/view?

cdcseacave commented 2 years ago

thx, I downloaded it, but it does not seem like a valid MVS file; I can not open it in Viewer. pls make sure you use the Interface correctly and export an MVS file with it: https://github.com/cdcseacave/openMVS/wiki/Interface

Livan89 commented 2 years ago

@cdcseacave I have implemented an auxiliary function in DensifyPointCloud: it loads the scan files from a LiDAR device in CSV format and adds them to a dense cloud (scene.pointcloud). Before that, I load the sparse.mvs with the intrinsic and extrinsic data of the cameras into scene.platforms. I save this data in MVS and PLY format. The PLY file is displayed correctly. ReconstructMesh loads the MVS file successfully and processes it, only the result does not make much sense. I don't know if I managed to explain myself correctly.
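The auxiliary step described above, parsing CSV scans and merging them into one cloud using each scan's pose, can be sketched as follows. This is a rough Python/NumPy illustration with hypothetical names, not the actual DensifyPointCloud code:

```python
import csv
import io
import numpy as np

def load_scan(csv_text):
    """Parse a LiDAR scan given as 'x,y,z' lines into an (N, 3) float array."""
    rows = [[float(v) for v in row[:3]]
            for row in csv.reader(io.StringIO(csv_text)) if row]
    return np.asarray(rows)

def merge_scans(scans, poses):
    """Bring each scan into a common world frame with its (R, t) pose
    (X_world = R @ X_scan + t) and concatenate into one dense cloud."""
    world = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(world)

# toy example: one two-point scan translated by +10 along x
scan = load_scan("0,0,1\n0,1,1\n")
R = np.eye(3)
t = np.array([10.0, 0.0, 0.0])
cloud = merge_scans([scan], [(R, t)])
print(cloud)  # → [[10. 0. 1.] [10. 1. 1.]]
```

The key point is that the (R, t) used to merge the scans must match the camera extrinsics stored in scene.platforms; otherwise the cloud and the cameras end up in different frames.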

cdcseacave commented 2 years ago

for any debugging you should use the latest OpenMVS code, which is in the develop branch; try again with that code and if you still have the problem pls send me the appropriate MVS file together with the input images

Livan89 commented 2 years ago

I share with you the processing log of running ReconstructMesh on the cloud.mvs file; it produces many faces, but the result is still wrong.

11:52:48 [App ] Build date: Jun 16 2022, 09:53:32
11:52:48 [App ] CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (12 cores)
11:52:48 [App ] RAM: 23.88GB Physical Memory 128.00TB Virtual Memory
11:52:48 [App ] OS: Windows 8 x64
11:52:48 [App ] SSE & AVX compatible CPU & OS detected
11:52:48 [App ] Command line: -i cloud.mvs -o mesh_reconstruct.mvs --smooth 5 --close-holes 40
11:52:48 [App ] CUDA device 0 initialized: NVIDIA GeForce GTX 1050 Ti (compute capability 6.1; memory 4.00GB)
11:52:50 [App ] Scene loaded (2s389ms): 30 images (30 calibrated) with a total of 120.00 MPixels (4.00 MPixels/image) 2437564 points, 0 vertices, 0 faces
11:53:01 [App ] error: reference image 9 has not enough images in view
11:53:02 [App ] error: reference image 15 has not enough images in view
11:53:13 [App ] error: reference image 10 has not enough images in view
11:53:15 [App ] error: reference image 16 has not enough images in view
11:53:15 [App ] error: reference image 13 has not enough images in view
11:53:22 [App ] error: reference image 11 has not enough images in view
11:53:22 [App ] error: reference image 8 has not enough images in view
11:53:23 [App ] error: reference image 14 has not enough images in view
11:53:46 [App ] Delaunay tetrahedralization completed: 2437564 points -> 2414211 vertices, 15205824 (+250) cells, 30411773 (+375) faces (20s397ms)
11:57:24 [App ] Delaunay tetrahedras weighting completed: 15206074 cells, 30412148 faces (3m37s643ms)
11:57:57 [App ] Delaunay tetrahedras graph-cut completed (1.05548e+07 flow): 1602452 vertices, 3424563 faces (33s578ms)
11:58:31 [App ] Mesh reconstruction completed: 1632115 vertices, 3225533 faces (5m7s335ms)
11:58:56 [App ] Cleaned mesh: 1287049 vertices, 2573413 faces (24s341ms)
11:59:06 [App ] Cleaned mesh: 1287066 vertices, 2573408 faces (10s173ms)
11:59:10 [App ] Cleaned mesh: 1287066 vertices, 2573408 faces (3s779ms)
11:59:12 [App ] Scene saved (1s696ms): 30 images (30 calibrated) 0 points, 1287066 vertices, 2573408 faces
11:59:13 [App ] Mesh saved: 1287066 vertices, 2573408 faces (1s37ms)
11:59:13 [App ] MEMORYINFO: {
11:59:13 [App ] PageFaultCount 4563570
11:59:13 [App ] PeakWorkingSetSize 6.30GB
11:59:13 [App ] WorkingSetSize 170.95MB
11:59:13 [App ] QuotaPeakPagedPoolUsage 265.88KB
11:59:13 [App ] QuotaPagedPoolUsage 265.88KB
11:59:13 [App ] QuotaPeakNonPagedPoolUsage 117.13KB
11:59:13 [App ] QuotaNonPagedPoolUsage 41.16KB
11:59:13 [App ] PagefileUsage 220.95MB
11:59:13 [App ] PeakPagefileUsage 6.71GB
11:59:13 [App ] } ENDINFO

Livan89 commented 2 years ago

@cdcseacave I attach several screenshots of the point cloud and the mesh obtained.

Let me explain a little more: you have all the experience with the ReconstructMesh pipeline, and there must be some detail that I am overlooking in the processing and meshing.

The LiDAR file I am working with encapsulates several capture scans of the same object. Each of these scans saves:

The problem when building the mesh is that, despite the cameras being in the correct positions (verified in CloudCompare), ReconstructMesh is not able to generate the polygons correctly. However, if I try to generate the mesh with the 3D points in their relative positions (without rotating and translating them), the mesh is generated correctly, but in that case each scan or fragment of the cloud ends up in a distant region of 3D space.
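Given that symptom, one quick consistency check is to verify that the transformed points actually lie in front of each camera under the scene's world-to-camera convention; a sign or transpose mismatch in R or t typically puts most points behind the cameras, which would ruin the graph-cut stage even though the cameras look correct in a viewer. A minimal sketch, assuming the convention x_cam = R (X - C); the function name is hypothetical:

```python
import numpy as np

def frac_in_front(points, R, C):
    """Fraction of points with positive depth in a camera with
    world->camera rotation R and center C (x_cam = R @ (X - C))."""
    cam_coords = (points - C) @ R.T   # each row becomes R @ (X - C)
    return float(np.mean(cam_coords[:, 2] > 0))

# toy cloud in front of a camera at the origin looking down +Z
pts = np.array([[0.0, 0.0, 5.0], [1.0, -1.0, 3.0]])
print(frac_in_front(pts, np.eye(3), np.zeros(3)))  # → 1.0

# with the camera flipped 180 degrees the same cloud is entirely behind it
R_flipped = np.diag([1.0, -1.0, -1.0])
print(frac_in_front(pts, R_flipped, np.zeros(3)))  # → 0.0
```

If this fraction is near zero for most cameras on the merged cloud, the rotation/translation applied to the scans does not match the extrinsics stored in the MVS scene.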

In the images I try to show the problem in detail.

Do I make myself understood?

Process performed with translated and rotated 3D points; the camera positions are correct, pointing at the point cloud: (five screenshots attached)

Process with the 3D points without being moved or rotated; it can be seen that each scan appears in its own relative space: (two screenshots attached)

hamlinzheng commented 1 year ago

@Livan89 Hello, do you want to use a point cloud scanned by LiDAR as the sparse point cloud, then import it together with the images into OpenMVS for mesh processing?