YijinHuang / SSiT

SSiT: Saliency-guided Self-supervised Image Transformer for Diabetic Retinopathy Grading

One single GPU #2

Closed: Wadha-Almattar closed this issue 1 year ago

Wadha-Almattar commented 1 year ago

I'm trying to reproduce your work. As mentioned in the README, 64 GB of GPU memory (4 GPUs) is required for training. I am working on a single GPU, an RTX 3080 Ti with 12 GB. Does the code support training the model on a single GPU?

YijinHuang commented 1 year ago

While it is possible to train on a single GPU, note that SSL models typically require a large batch size (a batch size of 512 requires at least 64 GB of GPU memory for SSiT). With a smaller batch size, the model can still be trained on an RTX 3080 Ti, but the performance may be lower than the reported results.
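A common workaround when GPU memory limits the per-step batch size is gradient accumulation. The sketch below is not part of the SSiT codebase; it is a minimal PyTorch illustration with a placeholder model and random data, showing how several small micro-batches can be accumulated before each optimizer step to approximate a larger effective batch.

```python
# Minimal sketch (not from the SSiT repository): gradient accumulation on a
# memory-limited GPU. The model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny placeholder model and random data; real SSiT training uses a ViT
# backbone on full-resolution fundus images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 5)).to(device)
dataset = TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 5, (512,)))

micro_batch = 32            # what fits in limited GPU memory
target_batch = 512          # desired effective batch size
accum_steps = target_batch // micro_batch

loader = DataLoader(dataset, batch_size=micro_batch, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

optimizer.zero_grad()
for step, (images, labels) in enumerate(loader):
    images, labels = images.to(device), labels.to(device)
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = criterion(model(images), labels) / accum_steps
    loss.backward()                      # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()                 # update once per effective batch
        optimizer.zero_grad()
```

One caveat: for contrastive or in-batch-negative SSL objectives, gradient accumulation is not equivalent to a true large batch, because each forward pass still only sees the small micro-batch, so the reported results may still not be fully reproducible this way.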