microsoft / protein-sequence-models


Confusing results taken from FLIP paper #17

Open Ieremie opened 1 year ago

Ieremie commented 1 year ago

In the paper it is mentioned that "Values for models other than CARP-640M are taken from Dallago et al. (2021)." However, the values reported do not seem to appear in any of the FLIP tables. What is even more confusing is that there are error bars coming from different random runs, while in the FLIP paper the results appear to come from a single run.

Another confusing part is the fine-tuning label in the results table. Does this mean that the language models are fine-tuned on the tasks, or only the head added on top? I am asking this because the FLIP paper mentions that the embedding models are kept frozen.

yangkky commented 1 year ago

"Fine-tune" means the entire pretrained model is finetuned on the task. "Freeze" means only the regression head is trained on the task.
