ardaduz / deep-video-mvs

Code for "DeepVideoMVS: Multi-View Stereo on Video with Recurrent Spatio-Temporal Fusion" (CVPR 2021)
MIT License

Can you provide the meshes of your method and the baseline methods on ScanNet and 7Scenes? #21

Closed · NoOneUST closed 2 years ago

NoOneUST commented 2 years ago

Can you provide the meshes of your method and the baseline methods on ScanNet and 7Scenes? Thank you so much!

ardaduz commented 2 years ago

I do not have them stored anymore. However, once you have the input data and the predictions structured like the sample data (https://github.com/ardaduz/deep-video-mvs/tree/master/sample-data), you can produce the meshes with https://github.com/ardaduz/deep-video-mvs/blob/master/sample-data/run-tsdf-reconstruction.py. Instructions are here: https://github.com/ardaduz/deep-video-mvs#tsdf-reconstructions.

Everything you need to get the meshes is in this repository.

Let me know if something is not clear.
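For orientation, here is a minimal sketch of the kind of TSDF fusion step such a script performs, using Open3D's `ScalableTSDFVolume`. The folder layout (`images/`, `depths/`, `poses/`), resolution, intrinsics, and depth scale below are illustrative assumptions, not the repository's exact script or data format.

```python
# Minimal TSDF-fusion sketch with Open3D (assumed layout, not the repo's exact script).
import glob
import os

import numpy as np
import open3d as o3d

scene_dir = "sample-data/scene"                 # hypothetical scene folder
width, height = 320, 256                        # assumed prediction resolution
fx = fy = 250.0                                 # assumed intrinsics
cx, cy = width / 2.0, height / 2.0
intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.02,                          # 2 cm voxels (assumption)
    sdf_trunc=0.08,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for depth_file in sorted(glob.glob(os.path.join(scene_dir, "depths", "*.png"))):
    frame_id = os.path.splitext(os.path.basename(depth_file))[0]
    color = o3d.io.read_image(os.path.join(scene_dir, "images", frame_id + ".png"))
    depth = o3d.io.read_image(depth_file)       # assumed 16-bit depth in millimeters
    pose = np.loadtxt(os.path.join(scene_dir, "poses", frame_id + ".txt"))  # camera-to-world 4x4

    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=3.0,
        convert_rgb_to_intensity=False)
    # Open3D expects the world-to-camera extrinsic, hence the inverse of the pose.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh(os.path.join(scene_dir, "fused_mesh.ply"), mesh)
```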

NoOneUST commented 2 years ago

Hi, thanks so much for your reply. Did you test other metrics on ScanNet and 7Scenes? For example, absolute relative error, absolute difference, and squared relative error?

ardaduz commented 2 years ago

Please check the paper.

We report Abs Rel, Abs Diff, and delta < 1.25 metrics. We also evaluate Abs Inv, since that is our loss function.
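For reference, these are the standard depth-evaluation definitions. Below is a minimal sketch of how they are typically computed; the valid-depth thresholds are illustrative assumptions, and the exact masking and averaging may differ from the paper's evaluation code.

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=0.25, max_depth=20.0):
    """Standard depth metrics over valid ground-truth pixels (thresholds are assumptions)."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[valid], gt[valid]

    abs_rel = np.mean(np.abs(pred - gt) / gt)            # Abs Rel
    abs_diff = np.mean(np.abs(pred - gt))                # Abs Diff (meters)
    abs_inv = np.mean(np.abs(1.0 / pred - 1.0 / gt))     # Abs Inv (inverse-depth error)
    sq_rel = np.mean((pred - gt) ** 2 / gt)              # Sq Rel
    ratio = np.maximum(pred / gt, gt / pred)
    delta_125 = np.mean(ratio < 1.25)                    # inlier ratio, delta < 1.25
    return dict(abs_rel=abs_rel, abs_diff=abs_diff, abs_inv=abs_inv,
                sq_rel=sq_rel, delta_125=delta_125)
```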