Hi @SamMaoYS,
Very cool work! It would be good to set up a conference call to discuss it in an interactive session. Can you contact me at fabien.castan@mikrosimage.com?
If the depth maps are good but the fusion is not, it may be a problem with the accuracy of the camera estimation. You can try to run the SfM in Meshroom from the source images, then use the SfMAlignment node to align this result to the sfmData you created from the iPad information, and then launch the Meshing on that using the precomputed depth maps. That way you will be at the right scale to use the depth maps from the sensor, but the camera alignment will be based on the source images (and not on the real-time image+IMU fusion).
You can also check the impact of initializing the SfM node from your initial sfmData (in all cases you need the SfMAlignment node to realign at the end).
In the advanced options of the Meshing node, you can also enable the "Save Raw Dense Point Cloud" option to see the fused point cloud before the min-cut.
Thank you @fabiencastan, I followed your suggestions and used the SfMAlignment node. I aligned the Meshroom-estimated SfM from the source images to the SfM created with the known camera poses. The following is a comparison between the two SfM files: the highlighted one is after alignment, the dark one is the SfM from the source images.
As you can see, the scale with the known camera poses is larger. I use this aligned SfM (the larger one) to perform the reconstruction without sensor depth maps (all depth maps are estimated by Meshroom), which mitigates the impact of inaccurate depths. In the DepthMap node, I tried two downscale factors: 1 (dimension 1920x1440) and 2 (dimension 960x720). The meshing results are shown below.
Downscale 1
Downscale 2
With downscale 2, the reconstruction tends to have more holes. So do the scale of the object space and the scale of the depth maps actually affect the reconstruction result? I will investigate this further with more scenes and check whether the situation persists with sensor depth. My first impression is that since the sensor depth maps are quite dense, this should be less of an issue, but maybe I am wrong.
Hi @SamMaoYS, I want to do the same but using the ToF sensor of the Honor View 20. I was able to import the AR poses in Meshroom, but how do you import the depth maps? Do you create the _depthMap.exr files yourself? What about the _simMap.exr? Can you share your pipeline?
To be "viewable" I create 16bits grayscale depth png (1000<=>1m) : https://github.com/remmel/rgbd-dataset/tree/main/2021-04-10_114859_pcchessboard
Related issue: https://github.com/alicevision/meshroom/issues/764
Hi @remmel, I created the _depthMap.exr by replacing the depth values of the Meshroom-estimated depth maps. I use the readImage and writeImage functions in AliceVision/src/aliceVision/mvsData/imageIO.hpp to perform the depth value replacement. In my understanding, the _simMap.exr is used to score the depth values, and it is optional.
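For readers who prefer Python over the C++ helpers mentioned above, a rough equivalent of that replacement step could look like the sketch below. The file names and scale factor are placeholders, it assumes OpenCV was built with OpenEXR support, and plain cv2.imwrite will not preserve any AliceVision metadata stored in the original EXR, so treat it as an illustration only.

```python
# Illustrative alternative to the C++ readImage/writeImage approach described
# above: overwrite the Meshroom-estimated depths with (scaled) sensor depths.
# NOTE: cv2.imwrite does not copy the AliceVision metadata of the original EXR.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # may be required by recent OpenCV builds
import numpy as np
import cv2

SCALE = 3.4                                   # placeholder Meshroom-scale / sensor-scale ratio
DEPTH_EXR = "viewid_depthMap.exr"             # placeholder file names
SENSOR_NPY = "viewid_sensor_depth.npy"

mr_depth = cv2.imread(DEPTH_EXR, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
sensor_m = np.load(SENSOR_NPY).astype(np.float32)  # hypothetical: sensor depth in metres

# Resize the sensor depth to the depth-map resolution, bring it to Meshroom's
# scale, and keep the original estimate wherever the sensor has no valid depth.
sensor = cv2.resize(sensor_m, (mr_depth.shape[1], mr_depth.shape[0]),
                    interpolation=cv2.INTER_NEAREST) * SCALE
merged = np.where(sensor > 0, sensor, mr_depth).astype(np.float32)

cv2.imwrite(DEPTH_EXR, merged)
```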
To share my first accomplishment using the Honor View 20 ToF sensor instead of the calculated EXR depth maps: I am calculating the scale between the two worlds by hand (measuring here the width of my Europe map). My next steps will be to fine-tune the scale, test different intrinsics, and improve the calculated depth maps instead of replacing them with the ToF depth maps.
I'm now enhancing the calculated EXR depth maps with the ToF depth maps. I might add a new node in Meshroom; in the meantime, the code can be found here: https://github.com/remmel/image-processing-js/blob/master/tools/depthMapsExr.py
To calculate the scale ratio between Meshroom and the real world, I make sure that the first picture has, in its center, an area which has both features and 100% ToF confidence, and then compare the depth of the center pixel (w/2, h/2). I no longer try to import the AREngine poses, as those poses are not perfect either (in a test, the mesh generated by importing the poses was worse than the one from the default pipeline).
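The center-pixel comparison itself is tiny; something along these lines (assuming both depth maps for that first view are already loaded as float arrays; the names are hypothetical):

```python
# Sketch of the center-pixel scale estimation described above: compare the
# Meshroom depth and the ToF depth at (w/2, h/2) of the chosen first view.
def estimate_ratio(meshroom_depth, tof_depth_m):
    h, w = meshroom_depth.shape[:2]
    mr = float(meshroom_depth[h // 2, w // 2])
    tof = float(tof_depth_m[h // 2, w // 2])
    if mr <= 0 or tof <= 0:
        raise ValueError("center pixel lacks a valid depth in one of the maps")
    return mr / tof  # multiply ToF depths by this to bring them to Meshroom's scale

# e.g. ratio = estimate_ratio(mr_depth, sensor_m)
```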
I've added a new node, DepthMapImport, to handle that directly in Meshroom: https://github.com/remmel/meshroom/tree/dev_importTofDepthMap
Pipeline: Meshing.estimateSpaceFromSfm must be unchecked to really see the difference; otherwise the bounding box is too small.
As asked by @MAdelElbayumi, here is the link to the RGB-D photos: https://drive.google.com/drive/folders/1XHYL4QhIbeR6jJrxam18jot5ovs0QLR6?usp=sharing (note that my script naively takes the first photo in camera.sfm to calculate the ratio. That first one is the first photo displayed in the UI. For that dataset, I had to change camera.sfm to make sure that the first one is 00000455_image.jpg, as I want to calculate the ratio on that one; otherwise use the calculated ratio of 3.36378 to get the same result as mine). The photos were taken with fixed focus for that test (that's why the quality is also not so great).
@SamMaoYS, how did you plot your depth sensor data as a scatter plot? Can you provide any code/information?
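In case it helps while waiting for an answer: one generic way to get such a scatter plot (not necessarily how the screenshots above were produced) is to back-project the depth map with pinhole intrinsics and feed the points to matplotlib. The intrinsics below are placeholder values.

```python
# Generic sketch: back-project a depth map with pinhole intrinsics (fx, fy, cx, cy)
# and show the resulting 3D points as a scatter plot. Intrinsics are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

def backproject(depth_m, fx, fy, cx, cy, step=8):
    h, w = depth_m.shape
    v, u = np.mgrid[0:h:step, 0:w:step]   # subsample pixels to keep the plot light
    z = depth_m[v, u]
    valid = z > 0
    x = (u[valid] - cx) / fx * z[valid]
    y = (v[valid] - cy) / fy * z[valid]
    return x, y, z[valid]

depth_m = np.load("depth.npy")            # hypothetical float depth map in metres
x, y, z = backproject(depth_m, fx=1445.0, fy=1445.0, cx=959.5, cy=719.5)

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(x, z, -y, s=1)                 # axes swapped so that "up" points up
plt.show()
```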
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue is closed due to inactivity. Feel free to re-open if new information is available.
The link to the script above is broken.
@VihangMhatre all the useful code has been moved to this file: https://github.com/remmel/meshroom/blob/425b64c1a3976a0ab01a02171e61d7484857b653/meshroom/nodes/aliceVision/DepthMapImport.py (to be used as a Meshroom node)
Thanks! Any tutorial on how to build this? Will following https://github.com/remmel/meshroom/blob/dev_importTofDepthMap/INSTALL.md and then cloning the repository you mentioned work? Thanks!
If I remember correctly, you can simply copy-paste the DepthMapImport.py file into your installed release version of Meshroom.
This does not work. I get the same warning that another user got in this thread: https://github.com/alicevision/meshroom/issues/1493 (the cv2 warning), and I cannot see the node.
This workflow may work: download the source code from the releases, add DepthMapImport.py to the nodes folder, add opencv-python and numpy to the requirements, and make sure at least Python 3.8 is installed (otherwise there will be an issue with os.add_dll_directory). Build Meshroom. Then copy and replace the new files over to the existing Meshroom binaries folder.
Describe the problem
Hi, thanks to the wiki and previously solved issues, I am able to reconstruct scenes with known camera poses. Since it is hard to predict the depth values in untextured areas, I want to further improve the result by utilizing depth maps from a depth sensor. I manage to do this by replacing the depth values of the filtered depth maps in the DepthMapFilter node with the depth values from the sensor. Since I also reconstruct with known camera poses, the depth values should be correct. Overall, this process works as expected, but the reconstructed mesh has some significant artifacts: holes and bumpy surfaces. What I know is that during the meshing, Delaunay tetrahedralization is first used to do the volume discretization, then weights are assigned to the vertices, and finally a minimal s-t graph cut selects the surfaces (a toy sketch of these two building blocks follows the screenshots below).
Screenshots
Projected dense point cloud from all depth maps (depth values from the sensor).
Meshing result from the above point cloud.
Projected dense point cloud from all depth maps (depth values estimated by the Meshroom pipeline)
Meshing result from the above point cloud.
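For readers unfamiliar with those two meshing steps, here is a toy sketch of the building blocks (Delaunay tetrahedralization followed by an s-t min-cut over the tetrahedra graph) using scipy and networkx. The capacities are arbitrary placeholders and do not reproduce AliceVision's visibility-based weighting.

```python
# Toy illustration of the two Meshing building blocks mentioned above:
# Delaunay tetrahedralization of a point cloud, then an s-t min-cut over the
# tetrahedra adjacency graph. Capacities are arbitrary placeholders and do
# NOT reproduce AliceVision's visibility-based weighting.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((300, 3))                 # toy dense point cloud
tets = Delaunay(points)                       # volume discretization

G = nx.Graph()
centroids = points[tets.simplices].mean(axis=1)
for i, c in enumerate(centroids):
    d = float(np.linalg.norm(c - 0.5))        # distance to the cube center
    G.add_edge("s", i, capacity=d)            # "s" side = outside
    G.add_edge("t", i, capacity=max(0.0, 0.9 - d))  # "t" side = inside

# Smoothness term: neighbouring tetrahedra prefer the same label.
for i, nbrs in enumerate(tets.neighbors):
    for j in map(int, nbrs):
        if j != -1 and j > i:
            G.add_edge(i, j, capacity=0.1)

cut_value, (outside, inside) = nx.minimum_cut(G, "s", "t")
print(f"cut value {cut_value:.2f}: "
      f"{len(inside) - 1}/{len(tets.simplices)} tetrahedra labelled inside")
```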
Dataset
The images (RGB + depth + camera poses) are collected from an iPad, using the Apple ARKit framework. The images are decoded from a video stream, so they are blurry. The meshing process, however, does not really depend on triangulated feature points, so this should not be a big problem.
Question
My question is: what could be the reason that, with depth values from the depth sensor, the reconstructed meshes have many holes? One thing to add is that the depth maps from the sensor are not very accurate, but they are much better than the depths estimated by Meshroom in untextured areas. I can tweak the Nb Pixel Size Behind and Full Weight parameters to fill those holes, but that results in less detail in the geometry.
Depth comparison (left: Meshroom-estimated depth; right: depth from sensor):
Textured object
Limited textures
Bumpy surfaces when I try to reconstruct with sensor depths (reconstruction of a laundry room)
Is it because the projected point cloud is too dense? I notice the point cloud is augmented with silhouette points. If the point cloud is too dense, will it influence the accuracy of the s-t graph cut? Overall, I am trying to find out why the reconstructed meshes are less complete and less smooth on the surfaces. If you need more information, please let me know; I will try to clarify as many questions as possible. Thank you!
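One cheap way to test that density hypothesis would be to thin out the sensor depth map before injecting it, keeping only every Nth pixel. A minimal sketch, assuming invalid pixels are marked with a negative value (which appears to be how the Meshroom-generated depth maps encode missing depth):

```python
# Quick experiment sketch for the density question above: keep only every Nth
# sensor depth pixel before injecting it, marking the rest as invalid.
# ASSUMPTION: a negative value (-1) flags "no depth" in the depth-map EXRs.
import numpy as np

def sparsify(depth, step=4, invalid=-1.0):
    out = np.full_like(depth, invalid)
    out[::step, ::step] = depth[::step, ::step]
    return out
```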