batmanlab / Mammo-CLIP

Official PyTorch implementation of the MICCAI 2024 paper (early accept, top 11%): Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
https://shantanu-ai.github.io/projects/MICCAI-2024-Mammo-CLIP/
Creative Commons Attribution 4.0 International

GPU memory #3

Closed by emrekeles-arch 3 months ago

emrekeles-arch commented 3 months ago

How much GPU memory is used during training and how long does training take?

shantanu-ai commented 3 months ago

We trained it on a single NVIDIA RTX 6000 GPU for 10 epochs. Training took 3 days for the UPMC data (image-text), and around 4.5 days for UPMC (image-text) + VinDr (image-labels). We are now migrating to distributed data parallel training so we can pre-train on bigger datasets.
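For a rough sense of per-epoch cost implied by these numbers, a back-of-envelope calculation (assuming epochs are roughly uniform in wall-clock cost, which the thread does not state explicitly) might look like:

```python
def days_per_epoch(total_days: float, epochs: int = 10) -> float:
    """Average wall-clock days per epoch, assuming uniform epoch cost."""
    return total_days / epochs

# Figures reported in the thread: 10 epochs on one RTX 6000.
upmc_only = days_per_epoch(3.0)        # UPMC image-text: 0.3 days (~7.2 h) per epoch
upmc_vindr = days_per_epoch(4.5)       # UPMC + VinDr: 0.45 days (~10.8 h) per epoch

print(f"UPMC only:     {upmc_only * 24:.1f} h/epoch")
print(f"UPMC + VinDr:  {upmc_vindr * 24:.1f} h/epoch")
```

This is only an estimate from the reported totals; actual per-epoch times will vary with batch size, image resolution, and data loading throughput.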