Closed MJITG closed 4 years ago
Hi @MJITG, we cleaned and re-structured the code after paper submission for readability, and that could have caused the difference between the runtime memory reported in the paper and the current code version on GitHub. I didn't re-evaluate the runtime memory after that, but checking it now, my memory consumption is around ~1230MB.
I think the paper's numbers can be achieved again by re-organizing the code. Probably the easiest way would be to perform some of the operations in-place rather than allocating new variables.
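For example, an out-of-place operation allocates a fresh buffer for its result, while the in-place variant writes into existing storage. A minimal NumPy sketch of the idea (PyTorch's trailing-underscore methods such as `add_` and `mul_` behave analogously on CUDA tensors):

```python
import numpy as np

# Two ~4 MB float32 buffers.
x = np.ones((1024, 1024), dtype=np.float32)
y = np.full((1024, 1024), 2.0, dtype=np.float32)

# Out-of-place: allocates a brand-new ~4 MB array for the sum.
z = x + y

# In-place: reuses x's storage, no extra allocation.
np.add(x, y, out=x)

assert np.array_equal(x, z)            # same values either way
assert x.ctypes.data != z.ctypes.data  # but z occupies separate memory
```

Applied across a forward pass, replacing intermediates like this with in-place updates trims peak memory, at the cost of overwriting values you might still need (which also matters for autograd).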
Got it. Thank you!
Hi @ShivamDuggal4, in your paper and README.md you mention that your fast model consumes about 800MB of CUDA memory. However, when I tested your model on my machine, I observed a memory consumption of 1300MB. Is there anything I missed? My test environment: PyTorch 1.1; torchvision 0.4.0; Python 3.6; RTX 2080Ti; CUDA 10.