cdcseacave / openMVS

open Multi-View Stereo reconstruction library
http://cdcseacave.github.io
GNU Affero General Public License v3.0

Problem with using Polycam interface #1071

Open Weihaooooooo opened 9 months ago

Weihaooooooo commented 9 months ago

Describe the bug: I was trying to convert data captured with Polycam, using InterfacePolycam pointed at the keyframes folder and expecting a .mvs file as output (i.e. InterfacePolycam -i ../keyframes/ -o scene.mvs).

However, I got no output, and the log ended with free(): invalid size; Aborted (core dumped).

[screenshot: InterfacePolycam log ending with the free() error]

I also tried running InterfacePolycam without specifying the output name (i.e. InterfacePolycam -i ../keyframes/). The same logs were produced, but this time I got a 90 KB scene.mvs file, which was unusable.
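For reference, the two invocations tried, written out:

InterfacePolycam -i ../keyframes/ -o scene.mvs
InterfacePolycam -i ../keyframes/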

cdcseacave commented 9 months ago

Can you please share the data? It only works on scenes captured in LiDAR mode.

Weihaooooooo commented 9 months ago

Sure, the data was captured in LiDAR mode. The data is accessible here.

cdcseacave commented 9 months ago

I've tested your data and it works fine. The only issue is that the corrected data is incomplete, so the raw version of the images is selected, which is not accurate. I'll try to find some time to make the importer work with incomplete corrected versions as well.

Weihaooooooo commented 9 months ago

Thanks so much for your reply!!

In the folder, there are corrected_images and corrected_cameras. Maybe it is corrected_depth that is missing?

I exported the data from Polycam again and tried the same command. The error still occurs. Were you able to generate a readable .mvs file?

These are all the files I got from the raw export of the Polycam app. If you don't mind me asking a simple question: do I need to correct the images myself, or is it a problem with how I'm using the app?
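For reference, a sketch of the keyframes folder layout implied by this thread; only the corrected_* names and the missing depth folder are stated above, so the raw-side folder names are assumptions:

keyframes/
    images/              raw images (name assumed)
    cameras/             raw camera poses (name assumed)
    depth/               raw depth maps (name assumed)
    corrected_images/
    corrected_cameras/
    corrected_depth/     apparently missing from this export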

cdcseacave commented 9 months ago

Not sure how Polycam exports the data, but yes, normally I'd expect a corrected_depths folder as well. For me it works with the data you shared; here is the MVS scene I got: scene.zip

Weihaooooooo commented 9 months ago

Thanks again for the prompt reply.

I got the same scene file as mentioned in the original post, which was around 90 KB and unreadable by DensifyPointCloud.

I downloaded the file you shared and tried processing it with DensifyPointCloud. Unfortunately the same issue arose.

[screenshot: DensifyPointCloud log output]

Are you able to generate a point cloud from it? If so, it might be some other problem in my setup.

cdcseacave commented 9 months ago

Yes, it works for me. Are you sure you are using the latest develop branch? Can you please double-check and recompile everything?

Weihaooooooo commented 8 months ago

> Yes, it works for me. Are you sure you are using the latest develop branch? Can you please double-check and recompile everything?

Thanks for your reply. I was using the main branch. I will try recompiling with the develop branch and get back to you later.
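For anyone following along, a minimal sketch of switching to the develop branch and rebuilding, assuming the usual openMVS dependencies (Boost, Eigen, OpenCV, VCG, etc.) are already installed; exact CMake options such as VCG_ROOT vary by setup:

git clone https://github.com/cdcseacave/openMVS.git
cd openMVS
git checkout develop
mkdir make && cd make
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)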

trv-rscanlo2 commented 8 months ago

Hello. We are in the process of evaluating OpenMVS for use in our own project. Our process is similar to what is described here. As C++ is not our preferred language, it makes the most sense for us to get our data into a format accepted by one of your Interface executables and make that our entry point into your pipeline. The easiest way to test this approach is by making Polycam projects and trying the Polycam interface. However, we are running into trouble. Using the Polycam project posted in this thread, we are able to get through all of the steps, including texturing and transform, and output the scene as an OBJ. However, the output does not represent the intended subject in the least. When I try the pipeline against my own Polycam project, it fails to properly perform the dense mesh reconstruction. I am wondering if you have a Polycam project that you have successfully tested and can share here?

There are a few problems as I see it. As far as I can tell, Polycam does not provide the point cloud, so OpenMVS relies solely on the depth maps to generate a sparse point cloud, which only shows each point in a single view? This is a fundamental problem I have with our actual capture approach. My device has a depth camera and an RGB camera. I am creating a point cloud in real time based on the depth image and the known position and orientation from the onboard 6DoF system. At this point, I should not need the SfM capabilities of OpenMVG. I should have everything I need, right? I've got a point cloud whose density I can control, a set of RGB images, a set of depth images that correspond to the RGB images, and a known camera pose for each image.

The only issue I see is that all of the interfaces provided here (Polycam excluded) want, for each point in my cloud, a list of the images in which that point appears. But this can really only be obtained as an artifact of SfM. Each point in my point cloud would only have one image associated with it (the image I used to capture that point). To find every image in which a point could occur, I would have to go through a very intensive process of ray casting every pixel in every depth image against every point in my point cloud. Is there a practical way around this pairing of points to images? I'd love to have a discussion about this, possibly outside of a public forum. Thanks

cdcseacave commented 7 months ago

I tested a lot of Polycam scenes; all worked fine. I've just pushed a small change to the interface that allows importing scenes that do not have depth maps. Here is an example of a scene: https://filetransfer.io/data-package/tv34EYTR#link

OpenMVS densification does use the SfM point cloud with view information to improve accuracy, but good results can generally be obtained even without it, with just the images and the corresponding camera poses as input. Most important in both cases is that the camera poses are accurate. If there is no sparse point cloud as input, you can increase the number of neighbors during densification to get a more accurate reconstruction:

DensifyPointCloud.exe scene.mvs --number-views 32

trv-rscanlo2 commented 7 months ago

Thanks for getting back to me. Ideally, I would like to provide a point cloud (because I have a good one) as a PLY file, as an optional addition to the Polycam interface. I'm not a C++ programmer, so creating this additional interface on my own would be a difficult task. My question is: does supplying a point cloud, but not supplying multiple views for each point, help anything? Or will a point cloud be mostly ignored if I don't have multiple views per point?

cdcseacave commented 7 months ago

Do you have a normal per point?

trv-rscanlo2 commented 7 months ago

My points are being generated from depth images captured from a known position and orientation, so I have the normal of the camera position. I don't think I have the normal of the point, unless that can be calculated from that info. The depth camera also gives me a confidence value for each point.
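For what it's worth, a per-point normal can in principle be estimated from a depth image plus the camera intrinsics alone; a rough sketch of the standard finite-difference approach (not OpenMVS-specific; K is the intrinsics matrix):

P(u,v) = depth(u,v) * inv(K) * [u, v, 1]                          back-project pixel (u,v) to a 3D point
n(u,v) = normalize((P(u+1,v) - P(u,v)) x (P(u,v+1) - P(u,v)))     cross product of the local tangents

As noted below, though, DensifyPointCloud estimates view and normal information itself when fusing depth maps, so doing this by hand is usually unnecessary.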

trv-rscanlo2 commented 7 months ago

I have attempted to process the scene you provided. Here are my planned steps of execution; please let me know if you see problems with my command-line arguments. I know there are a lot of different args that can be tweaked, but I haven't touched most of them because I would be taking shots in the dark.

InterfacePolycam -i ../Statu-poly/keyframes/
DensifyPointCloud -i scene.mvs -o dense_scene.mvs -v 99 --number-views 32
ReconstructMesh -i dense_scene.mvs -o reconstruct_mesh.mvs -v 99
RefineMesh -i reconstruct_mesh.mvs -o refine_mesh.mvs -v 99 --max-face-area 16
TextureMesh -i refine_mesh.mvs -o textured_mesh.mvs -v 99
TransformScene -i textured_mesh.mvs -o transformed_mesh.mvs --export-type obj -v 99 --transfer-texture-file textured_mesh.ply

It starts to run into problems with DensifyPointCloud. All of the output looks like it was unable to create more dense points. When I run ReconstructMesh, it fails completely with "error: failed loading image header". I have tried running DensifyPointCloud with and without the --number-views argument. I am running off a build from the develop branch that was cloned midweek, last week, in a Docker container on an Ubuntu machine with CUDA enabled and a GTX 1080 GPU.

Are you doing something very different when processing your scene from Polycam?

cdcseacave commented 7 months ago

That looks good; it should work if the input is ok.

As for the point cloud, there is no reason to extract it yourself from the depth maps; that is exactly what DensifyPointCloud does. If the DMAP files already exist, it will just fuse the depth maps into a point cloud, while also annotating/estimating the view and normal information.

You only need to be careful that the depth maps are at the resolution requested by DensifyPointCloud, which by default is half the resolution of the image; if your depth maps are a different resolution, set it by hand, e.g.:

DensifyPointCloud scene.mvs --resolution-level 0 --max-resolution 600

where 600 is the max(width, height) of the depth-map resolution
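As a worked instance with the depth-map size that comes up next in this thread (256x192):

max(256, 192) = 256  ->  DensifyPointCloud scene.mvs --resolution-level 0 --max-resolution 256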

trv-rscanlo2 commented 7 months ago

Ok, your point about not extracting a point cloud in favor of using the depth maps makes sense. I won't attempt to shoehorn a point cloud into the Polycam interface.

I am a little confused about the resolution settings for DensifyPointCloud. In the Polycam example you supplied, the images are 1024x768 and the depth images are 256x192. This is less than half the resolution of the image. In order to process this scene, do I need to use the --resolution-level and --max-resolution args? If so, what values should I pass in given the resolution of the images and depth images? Is this why the scene is failing to process with the list of commands I gave earlier?

cdcseacave commented 7 months ago

for that case use

DensifyPointCloud scene.mvs --resolution-level 0 --max-resolution 256

trv-rscanlo2 commented 7 months ago

Ok. That worked! Thanks for the help. I may have a few more questions as we get our project output into this format. But I'll let you know.

trv-rscanlo2 commented 7 months ago

Do you have any recommendations in the CLI args to optimize the processing for spaces, as opposed to centralized objects?

trv-rscanlo2 commented 7 months ago

Hi. Not sure if you saw my last comment. Do you have any recommendations in the CLI args to optimize the processing for spaces, as opposed to centralized objects?

cdcseacave commented 7 months ago

The pipeline should be general; no special cases or params are needed.