uber-research / DeepPruner

DeepPruner: Learning Efficient Stereo Matching via Differentiable PatchMatch (ICCV 2019)

Question about runtime memory #20

Closed MJITG closed 4 years ago

MJITG commented 4 years ago

Hi @ShivamDuggal4, in your paper and README.md you mention that the fast model consumes about 800 MB of CUDA memory. However, when I tested the model on my machine, I observed a memory consumption of about 1300 MB. Is there anything I missed? My test environment: PyTorch 1.1; torchvision 0.4.0; Python 3.6; RTX 2080Ti; CUDA 10.
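For reference, a minimal sketch of how peak CUDA memory can be measured around a forward pass in recent PyTorch versions (the helper name `peak_cuda_mb` is hypothetical; `torch.cuda.reset_peak_memory_stats` is the newer name, PyTorch 1.1 used `reset_max_memory_allocated` instead):

```python
import torch

def peak_cuda_mb(fn):
    """Run fn() and return the peak CUDA memory allocated, in MB.

    Returns None when no GPU is available.
    """
    if not torch.cuda.is_available():
        return None
    torch.cuda.empty_cache()
    # Reset the peak-allocation counter before the measured call.
    torch.cuda.reset_peak_memory_stats()
    fn()
    # Make sure all queued kernels have finished before reading stats.
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / (1024 ** 2)
```

Note that `max_memory_allocated` reports memory handed out by PyTorch's caching allocator, while `nvidia-smi` also counts the CUDA context and cached-but-unused blocks, so the two numbers usually differ.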

ShivamDuggal4 commented 4 years ago

Hi @MJITG, we cleaned and restructured the code after the paper submission for readability, and that could have caused the difference between the runtime memory reported in the paper and the current code version on GitHub. I hadn't re-evaluated the runtime memory after that change, but checking it now, my memory consumption is around ~1230MB.

I think the paper's number can be reached again by reorganizing the code. Probably the easiest way would be to perform some of the operations in-place rather than allocating new intermediate tensors.
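To illustrate the in-place idea (shown here with NumPy for simplicity; PyTorch has the same pattern via in-place ops like `add_` or the `out=` argument): an out-of-place op allocates a fresh buffer for its result, while an in-place op reuses an existing one.

```python
import numpy as np

a = np.ones(4, dtype=np.float32)
b = np.full(4, 2.0, dtype=np.float32)

# Out-of-place: allocates a new array to hold the result.
c = a + b

# In-place: writes the result into a's existing buffer, no new allocation.
np.add(a, b, out=a)

print(a)  # a now holds the same values as c
```

Applied across a network's intermediate tensors, avoiding those extra allocations can noticeably lower the peak memory footprint.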

MJITG commented 4 years ago

Got it. Thank you!