rmaphoh / RETFound_MAE

RETFound - A foundation model for retinal images

Requires grad parameters to finetune a downstream classification task #14

Closed rgbarriadaphd closed 7 months ago

rgbarriadaphd commented 8 months ago

Hello,

First of all thanks for sharing this great framework.

I was wondering what the best approach is to fine-tune the model from the pretrained weights for a binary classification task. Should we compute gradients for all parameters (requires_grad=True), or should we freeze some parts of the model?

Thanks in advance. Regards,

rmaphoh commented 8 months ago

Thanks. It is reasonable to test the performance in both settings.

In our experience, unfreezing all the parameters generally leads to better performance.
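The two settings can be sketched in PyTorch by toggling `requires_grad` on the encoder parameters. This is a minimal illustration, not RETFound's actual fine-tuning code: the tiny `backbone` below is a hypothetical stand-in for the pretrained ViT encoder, and the helper name `set_finetune_mode` is invented for this example.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pretrained encoder; in practice you would
# load the RETFound pretrained ViT weights here instead.
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 16 * 16, 128),
    nn.GELU(),
    nn.Linear(128, 128),
)
head = nn.Linear(128, 2)  # binary classification head (2 logits)
model = nn.Sequential(backbone, head)

def set_finetune_mode(model: nn.Module, backbone: nn.Module, full_finetune: bool):
    """Toggle between full fine-tuning and a frozen-backbone (linear-probe) setup."""
    for p in model.parameters():
        p.requires_grad = True
    if not full_finetune:
        for p in backbone.parameters():
            p.requires_grad = False

# Setting A: unfreeze everything (the setting the maintainer recommends)
set_finetune_mode(model, backbone, full_finetune=True)
trainable_full = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Setting B: freeze the encoder, train only the classification head
set_finetune_mode(model, backbone, full_finetune=False)
trainable_head = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Pass only trainable parameters to the optimizer
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
print(trainable_full, trainable_head)
```

The same toggle applies unchanged to the real model; only the parameter counts differ. Testing both settings, as suggested above, is just a matter of running the training loop twice with `full_finetune` flipped.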