thomashopkins32 / HuBMAP

Hacking the Human Vasculature (Kaggle Competition)
Apache License 2.0

Figure out optimal memory usage on Kaggle GPUs #15

Closed thomashopkins32 closed 1 year ago

thomashopkins32 commented 1 year ago

Kaggle uses NVIDIA Tesla P100 GPUs, which have 16 GB of dedicated memory. Testing locally on my RTX 3070 (8 GB), we can run a batch size of 4 at full precision and a batch size of 8 with mixed precision. We should test how many samples we can fit in a batch with 16 GB of memory.
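One way to test this empirically is a doubling-then-binary search over batch sizes. A minimal sketch, where `fits` is an assumed user-supplied callable that attempts one forward/backward pass at the given batch size and returns `False` when it hits a CUDA out-of-memory `RuntimeError` (catching it and calling `torch.cuda.empty_cache()` before returning):

```python
def find_max_batch_size(fits, upper=1024):
    """Return the largest batch size for which fits(batch) is True.

    `fits(batch)` should try one training step at that batch size and
    return False if it runs out of memory (e.g. by catching the CUDA
    OOM RuntimeError in PyTorch).
    """
    # Phase 1: double the batch size until it no longer fits,
    # bracketing the true limit between lo (fits) and hi (doesn't).
    lo, hi = 0, 1
    while hi <= upper and fits(hi):
        lo, hi = hi, hi * 2
    if lo == 0:
        raise ValueError("even batch size 1 does not fit in memory")
    # Phase 2: binary search within (lo, hi) for the exact maximum.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

This keeps the probing logic separate from the model code, so the same function works for full and mixed precision runs; only the `fits` callable changes.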

My guess would be in the range of 16-20 samples for mixed precision, but maybe more?
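That guess is consistent with a simple linear memory model: total usage is a fixed overhead (weights, optimizer state, CUDA context) plus a per-sample activation cost. The fixed overhead of 2 GB below is an assumed illustrative number, not a measurement; the per-sample cost is then solved from the local observation that a 3070 fits batch size 8 with mixed precision:

```python
def estimate_max_batch(mem_gb, fixed_gb, per_sample_gb):
    """Largest batch under a linear model:
    total = fixed overhead + per-sample activation cost * batch."""
    return int((mem_gb - fixed_gb) // per_sample_gb)

fixed = 2.0                    # assumed fixed overhead in GB (hypothetical)
c_mp = (8.0 - fixed) / 8       # 0.75 GB/sample, from batch 8 fitting in 8 GB

print(estimate_max_batch(16.0, fixed, c_mp))  # -> 18 on a 16 GB P100
```

So with those assumptions the 16 GB P100 lands at 18 samples, right in the 16-20 range; a larger fixed overhead pushes the estimate down, a smaller one pushes it up.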