cocoakang / photometric_feature_transformer

The official implementation of the paper Learning Efficient Photometric Feature Transform for Multi-view Stereo.
GNU General Public License v3.0

Evaluation issue #2

Closed · shleecs closed this issue 2 years ago

shleecs commented 2 years ago

Hi, nice work!

I have another question about the experiment in the code.

How can I reconstruct the point cloud or mesh and compute completeness or precision with this code?

I estimated the multi-view feature maps using val_files/inference_dir/infer_dift_codes.bat, but I don't know how to proceed after this step.

I will be looking forward to your guidance.

Thank you!

Best regards, Sungho

cocoakang commented 2 years ago

Hi shleecs, you can use our modified multi-channel COLMAP to do dense matching, which generates depth maps and normal maps. This step takes the generated multi-view feature maps and our pre-generated SfM files (in the undistort_feature_dift folder of each object) as input. Specifically, run these two commands:

colmap image_undistorter --image_path /path/to/feature/map/folder --input_path /path/to/undistort_feature_dift/ --output_path /path/to/undistort_feature_dift/ --input_type BIN

colmap patch_match_stereo --workspace_path /path/to/undistort_feature_dift --PatchMatchStereo.multi_channel 1 --PatchMatchStereo.geom_consistency 1 --PatchMatchStereo.sigma_spatial 15 --PatchMatchStereo.sigma_color 5.0 --PatchMatchStereo.num_samples 20 --PatchMatchStereo.ncc_sigma 1.0

The first command undistorts the feature maps and the second performs dense reconstruction. After that, you can inspect the generated depth maps and normal maps in the COLMAP GUI by opening the undistort_feature_dift folder as a dense workspace. You can then use COLMAP's fusion command to fuse the inferred depth maps into a point cloud, and apply the screened Poisson algorithm to further obtain a mesh, as sketched below.
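In stock COLMAP these last two steps would look roughly like the following (the output file names are placeholders, and the exact option set of our modified multi-channel build may differ slightly):

colmap stereo_fusion --workspace_path /path/to/undistort_feature_dift --input_type geometric --output_path /path/to/undistort_feature_dift/fused.ply

colmap poisson_mesher --input_path /path/to/undistort_feature_dift/fused.ply --output_path /path/to/undistort_feature_dift/meshed-poisson.ply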

For accuracy/completeness, you can use this tool, or use our script val_files/fusion_transfer_2_gt_cmp.py.
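If you only want a quick sanity check of the two metrics against a ground-truth scan, a minimal sketch along these lines works (the file names and the distance threshold here are assumptions for illustration, not the settings used by our script; use fusion_transfer_2_gt_cmp.py for the official numbers):

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def nn_distances(src, dst):
    """Nearest-neighbour distance from each point in src to the point set dst."""
    tree = cKDTree(dst)
    d, _ = tree.query(src, k=1)
    return d

# Hypothetical file names: the fused COLMAP point cloud and the ground-truth scan.
pred = np.asarray(o3d.io.read_point_cloud("fused.ply").points)
gt = np.asarray(o3d.io.read_point_cloud("gt.ply").points)

acc_d = nn_distances(pred, gt)   # prediction -> GT distances (accuracy)
comp_d = nn_distances(gt, pred)  # GT -> prediction distances (completeness)

tau = 0.002  # assumed 2 mm threshold, not necessarily the paper's setting
print(f"accuracy (mean dist):     {acc_d.mean():.4f}")
print(f"completeness (mean dist): {comp_d.mean():.4f}")
print(f"precision @ {tau}: {(acc_d < tau).mean():.3f}")
print(f"recall    @ {tau}: {(comp_d < tau).mean():.3f}")
```

Note that the two point clouds must be in the same coordinate frame (and scale) before computing the distances.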