ToughStoneX / Self-Supervised-MVS

Pytorch codes for "Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation"

On PyTorch Version & Training Time #6

Closed · II-Matto closed this issue 3 years ago

II-Matto commented 3 years ago

Thanks for this great work! I have two quick questions:

  1. Will PyTorch 1.4.0 be OK for running this code? I notice that the recommended version is 1.1.0.
  2. How long will it take to train JDACS (w/o MS)? I notice that the README says training JDACS-MS can take several days with 4 GPUs. Is training JDACS less time-consuming?

BTW, is there any Python implementation of the evaluation code, which is currently implemented with Matlab?

Many thanks.

ToughStoneX commented 3 years ago

Hello,

  1. It is OK to run the code with a PyTorch version above 1.1.0; 1.1.0 is recommended simply because it matches the environment of my server. (A quick environment check is sketched after this list.)
  2. As far as I remember, training JDACS with 4 GPUs takes about half a day on my server, whereas JDACS-MS requires several days on 4 GPUs. This is because the two use different backbones: MVSNet is utilized in JDACS, while CVP-MVSNet is used in JDACS-MS, and the training time largely depends on the backbone.
  3. For evaluation, you can directly run test.sh in JDACS-MS or eval_dense.sh in JDACS. These scripts generate the 3D models in .ply format. The provided Matlab code comes from the DTU benchmark and is used to assess performance following their official protocol. (A rough Python sketch of a similar accuracy/completeness computation is shown below.)
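
On point 1, a minimal environment check along the lines below can confirm the installed PyTorch version and visible GPUs before training; this snippet is not part of the repo, and the exact values printed depend on your setup:

```python
# Quick sanity check of the training environment (a minimal sketch, not part of the repo).
import torch

print("PyTorch version:", torch.__version__)       # e.g. 1.4.0 or newer should be fine
print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())   # the training times above assume 4 GPUs
```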
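On the question of a Python alternative to the Matlab evaluation: a minimal sketch along the lines below could compute accuracy/completeness distances between a generated .ply model and the corresponding ground-truth point cloud, assuming Open3D is installed. The file paths and the outlier threshold are placeholders, and this does not reproduce the official DTU protocol (which additionally applies observability masks), so the provided Matlab code remains the reference.

```python
# Hypothetical sketch of a Python accuracy/completeness check between two .ply point clouds.
# NOT the official DTU evaluation; paths and the distance threshold below are placeholders.
import numpy as np
import open3d as o3d


def evaluate_ply(pred_path: str, gt_path: str, dist_thresh: float = 20.0):
    pred = o3d.io.read_point_cloud(pred_path)
    gt = o3d.io.read_point_cloud(gt_path)

    # Accuracy: distance from each predicted point to its nearest ground-truth point.
    acc = np.asarray(pred.compute_point_cloud_distance(gt))
    # Completeness: distance from each ground-truth point to its nearest predicted point.
    comp = np.asarray(gt.compute_point_cloud_distance(pred))

    # Discard large outlier distances before averaging (threshold is an assumption here).
    acc = acc[acc < dist_thresh]
    comp = comp[comp < dist_thresh]
    return acc.mean(), comp.mean(), (acc.mean() + comp.mean()) / 2


if __name__ == "__main__":
    accuracy, completeness, overall = evaluate_ply("outputs/scan1.ply", "gt/scan1.ply")
    print(f"acc={accuracy:.3f}  comp={completeness:.3f}  overall={overall:.3f}")
```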