Hi,
What is the GPU memory cost of fine-tuning the 650M SaProt model? I get an out-of-memory (OOM) error when I run the fine-tuning script with the command provided in the README. My GPU has 24 GB of memory.
Hi, thank you for your interest in our work!
We are sorry, but 24 GB of memory may not be enough to fine-tune the 650M model, even with the batch size set to 1. We recommend setting freeze_backbone=True to train only the classification head (see the sketch below), or using the 35M SaProt model as an alternative.
Hope this resolves your problem :)
Thanks!
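For reference, here is a minimal sketch of what freeze_backbone=True effectively does, assuming the model is loaded through the standard Hugging Face EsmForSequenceClassification class. The checkpoint ID and num_labels below are placeholders for illustration, not taken from the SaProt repo; substitute your actual weights and task setup:

```python
import torch
from transformers import EsmForSequenceClassification

# Placeholder checkpoint ID and label count; replace with the
# SaProt weights and task you are actually fine-tuning.
model = EsmForSequenceClassification.from_pretrained(
    "westlake-repl/SaProt_650M_AF2", num_labels=2
)

# Freeze every backbone parameter so no gradients (and no optimizer
# state) are kept for them; only the classification head remains
# trainable, which sharply reduces GPU memory usage.
for param in model.esm.parameters():
    param.requires_grad = False

# Give the optimizer only the trainable (head) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

With the backbone frozen, the forward pass still runs through all 650M parameters, but the bulk of the memory savings comes from skipping their gradients and AdamW state, which is usually what makes full fine-tuning exceed 24 GB.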