suinleelab / MONET

Transparent medical image AI via an image–text foundation model grounded in medical literature

Training cost #4

Open Chrisleowoo opened 4 months ago

Chrisleowoo commented 4 months ago

Hi, authors! Thanks for sharing this impressive work. I'm curious about the training cost of the whole pipeline. Did you train using CLIP?

chanwkimlab commented 4 months ago

Hi, thanks for your interest in our work! We used 6 Nvidia A40 GPUs, and model training took 1 h 40 min, as noted in the paper. It is hard to give an exact training cost because we used the University of Washington GPU cluster, but you could get a rough estimate by applying the hourly rates of commercial GPU cloud providers.
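The back-of-the-envelope estimate suggested above can be sketched as follows. The GPU count (6 A40s) and wall-clock time (1 h 40 min) come from the reply; the hourly rate is a hypothetical placeholder, since actual A40 pricing varies by provider.

```python
# Rough training-cost estimate from the figures in the reply above.
NUM_GPUS = 6            # Nvidia A40 GPUs (from the reply)
WALL_CLOCK_MINUTES = 100  # 1 h 40 min of training (from the reply)

# Hypothetical on-demand rate in USD per GPU-hour; check your
# cloud provider's current A40 pricing before relying on this.
RATE_PER_GPU_HOUR = 1.10

gpu_hours = NUM_GPUS * WALL_CLOCK_MINUTES / 60  # total GPU-hours consumed
estimated_cost = gpu_hours * RATE_PER_GPU_HOUR

print(f"{gpu_hours:.1f} GPU-hours, ~${estimated_cost:.2f} at the assumed rate")
```

With these numbers the run comes to 10 GPU-hours, so even at a few dollars per GPU-hour the total cost would be modest.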