JiayuYANG / CVP-MVSNet

Cost Volume Pyramid Based Depth Inference for Multi-View Stereo (CVPR 2020 Oral)

Question about GPU memory usage #14

Closed etudemin closed 4 years ago

etudemin commented 4 years ago

Hi @JiayuYANG ,

Thanks for your great work!

When running evaluation.sh with the default settings, I found that GPU memory usage usually ranges from 9900 to 11000 MB, which is higher than the 8795 MB shown in Table 2 of the paper.

What are possible reasons for the higher GPU memory usage? Thank you very much.

JiayuYANG commented 4 years ago

Hi @etudemin

I think there are a few possible reasons for this:

  1. By default, PyTorch uses the cuDNN auto-tuner to find the fastest algorithm for the task, which I believe may consume more GPU memory (if available) in exchange for faster inference. You might want to turn it off and also make PyTorch deterministic to get the minimum memory usage. You can check this documentation for more details.
  2. You might want to clear the CUDA cache before recording memory usage, using torch.cuda.empty_cache() and torch.cuda.reset_max_memory_allocated().
  3. You might want to use the PyTorch-provided torch.cuda.max_memory_allocated() to record the peak memory usage.
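The three steps above could be combined into a measurement harness like the following sketch. The `run_inference` function is a hypothetical placeholder for the actual evaluation loop, not part of CVP-MVSNet:

```python
import torch

# Hypothetical stand-in for the real evaluation loop (e.g. evaluation.sh).
def run_inference():
    x = torch.randn(64, 64, device="cuda")
    return x @ x

# 1. Disable the cuDNN auto-tuner and make cuDNN deterministic, so
#    inference picks a fixed algorithm instead of the fastest
#    (and potentially more memory-hungry) one.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

if torch.cuda.is_available():
    # 2. Release cached blocks and reset the peak-memory counter
    #    before the measured run.
    torch.cuda.empty_cache()
    torch.cuda.reset_max_memory_allocated()

    with torch.no_grad():
        out = run_inference()

    # 3. Report the true peak allocation, in MB.
    peak_mb = torch.cuda.max_memory_allocated() / (1024 ** 2)
    print(f"peak GPU memory: {peak_mb:.0f} MB")
```

Note that `max_memory_allocated()` reports tensor allocations only; `nvidia-smi` additionally counts the CUDA context and the caching allocator's reserved pool, so its numbers will always be somewhat higher.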

Cheers, Jiayu

etudemin commented 4 years ago

Hi @JiayuYANG

Thank you very much for the kind replies. I will try that, thanks again!

Sincerely, etudemin