Closed · talbrend closed this 5 months ago
Hi guys, thanks for this great work :) I see that your fine-tuning examples (in the accompanying notebook) don't include GO annotations alongside the amino acid sequences. Can you explain how the annotations were incorporated during the training phase while apparently not being used during fine-tuning? Also, I have fine-tuned the model on my own task and got quite good results without the GO annotations. Please clarify.

@talbrend It's a form of transfer learning: training a model on one task (protein language modeling + GO annotation prediction) makes it better at other, distinct but related tasks (whatever task it's fine-tuned on).
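To illustrate the idea, here is a minimal Keras sketch (not this repository's actual API; the encoder architecture, vocabulary size, and GO-term count below are all hypothetical): during pretraining a shared encoder feeds two heads, a language-modeling head and a GO-annotation head, and at fine-tuning time the GO head is simply discarded and a new task head is attached to the same encoder weights, so fine-tuning data never needs GO annotations.

```python
# Illustrative sketch only: pretraining with an auxiliary GO head,
# then fine-tuning the same encoder without it.
import tensorflow as tf
from tensorflow import keras

VOCAB_SIZE = 26        # amino-acid token vocabulary (hypothetical)
NUM_GO_TERMS = 8943    # number of GO annotation labels (hypothetical)
SEQ_LEN = 512

def build_encoder():
    # Shared sequence encoder; a stand-in for the actual pretrained model.
    inputs = keras.Input(shape=(SEQ_LEN,), dtype="int32")
    x = keras.layers.Embedding(VOCAB_SIZE, 128)(inputs)
    x = keras.layers.Bidirectional(keras.layers.GRU(128, return_sequences=True))(x)
    return keras.Model(inputs, x, name="encoder")

encoder = build_encoder()
seq_in = keras.Input(shape=(SEQ_LEN,), dtype="int32")
hidden = encoder(seq_in)

# --- Pretraining: two heads share the encoder ---
lm_out = keras.layers.Dense(VOCAB_SIZE, activation="softmax", name="lm")(hidden)
pooled = keras.layers.GlobalAveragePooling1D()(hidden)
go_out = keras.layers.Dense(NUM_GO_TERMS, activation="sigmoid", name="go")(pooled)

pretrain_model = keras.Model(seq_in, [lm_out, go_out])
pretrain_model.compile(
    optimizer="adam",
    loss={"lm": "sparse_categorical_crossentropy", "go": "binary_crossentropy"},
)
# pretrain_model.fit(seqs, {"lm": token_targets, "go": go_labels}, ...)

# --- Fine-tuning: the GO head is dropped; only sequences + task labels needed ---
task_out = keras.layers.Dense(1, activation="sigmoid", name="task")(
    keras.layers.GlobalAveragePooling1D()(encoder(seq_in))
)
finetune_model = keras.Model(seq_in, task_out)
finetune_model.compile(optimizer="adam", loss="binary_crossentropy")
# finetune_model.fit(seqs, task_labels, ...)  # no GO annotations required
```

The GO head exists only as an auxiliary pretraining objective that shapes the encoder's representations; because the encoder weights carry over, downstream tasks benefit from the GO signal without ever seeing GO labels.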