ToughStoneX / Self-Supervised-MVS

Pytorch codes for "Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation"

About the dataset used for evaluation #17

Open knightwzh opened 2 years ago

knightwzh commented 2 years ago

Hi, sorry to disturb you. When I reproduce the quantitative results of my own model trained only with the standard loss, I find a large difference between using the evaluation dataset you provided and the original evaluation split from dtu_yao.py. Specifically, when I take the depth maps produced via dtu_yao.py (with the test list) and fuse them into point clouds with fusion.py, the number of points is dramatically low (about 1 or 2 million), which makes the acc. and comp. far too high (about 3.3). Using your evaluation dataset instead produces much better results. I'm wondering where the reason lies.
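
For reference, a minimal sketch of how the gap could be quantified by comparing the sizes of the two fused point clouds. This is not code from the repo: the output paths are placeholders, and open3d is just one possible PLY reader.

```python
# Minimal sketch (not from this repo): compare the sizes of two fused point
# clouds to quantify the difference described above. Paths are placeholders.
import numpy as np
import open3d as o3d  # assumed available; any PLY reader would work


def point_count(ply_path: str) -> int:
    """Load a fused point cloud and return its number of points."""
    pcd = o3d.io.read_point_cloud(ply_path)
    return np.asarray(pcd.points).shape[0]


if __name__ == "__main__":
    # Point cloud fused from depth maps produced with dtu_yao.py (test list).
    n_test_list = point_count("outputs/dtu_yao_testlist/scan1.ply")
    # Point cloud fused from the evaluation dataset provided by the authors.
    n_eval_set = point_count("outputs/eval_dataset/scan1.ply")
    print(f"dtu_yao.py test list: {n_test_list:,} points")
    print(f"provided eval set:    {n_eval_set:,} points")
```

A per-scan comparison like this makes it easy to see whether the sparse fusion result (1-2 million points) is systematic across all scans or limited to a few of them.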