westlake-repl / SaProt

Saprot: Protein Language Model with Structural Alphabet (AA+3Di)
MIT License

finetuning with MLM task #70

Open sj584 opened 1 day ago

sj584 commented 1 day ago

Awesome work :)

I am thinking about fine-tuning this model on a specific protein domain with the MLM task, i.e. starting from the SaProt model weights and further fine-tuning on an unlabeled dataset.

  1. When I run this command for MLM fine-tuning, `python scripts/training.py -c config/Pretrain/saprot.yaml`, do I only need to change `load_pretrained: True`?

I would also like to ask your opinion on the following:

  2. Given that the model performs the MLM task on both structure tokens and sequence tokens, would it be possible to use it for a kind of structure prediction (sequence given -> structure token recovery -> 3D structure reconstruction)? Or vice versa, i.e. sequence design given a structure?

Thank you in advance for your comment!

sj584 commented 1 day ago

One more thing: would it also be possible to use PEFT (Parameter-Efficient Fine-Tuning) when fine-tuning this model?

LTEnjoy commented 1 day ago

Hi, thank you for your interest in our work and for asking some intriguing questions!

> When I run this command for MLM fine-tuning, `python scripts/training.py -c config/Pretrain/saprot.yaml`, do I only need to change `load_pretrained: True`?

Yes. Setting `load_pretrained` to `True` loads the pretrained SaProt weights as a starting point, from which you can further fine-tune your own model.
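For reference, the change would look roughly like this (a minimal sketch; the exact key layout inside `config/Pretrain/saprot.yaml` may differ, so check the actual file):

```yaml
# config/Pretrain/saprot.yaml (excerpt, layout assumed)
model:
  load_pretrained: True   # start from the released SaProt weights instead of a random init
```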

> Given that the model performs the MLM task on both structure tokens and sequence tokens, would it be possible to use it for a kind of structure prediction (sequence given -> structure token recovery -> 3D structure reconstruction)? Or vice versa, i.e. sequence design given a structure?

Interesting question! I think it depends on what kind of MLM task the model was pre-trained on. For SaProt, we didn't force it to predict structure tokens; only amino acid tokens were predicted when computing the loss. We discuss this in Appendix F of our paper. Because of this, SaProt may not have been endowed with the capability to do structure prediction. Conversely, it can indeed do protein sequence design given a structure backbone; see Fig. 1g of our latest SaprotHub paper.
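To make the two directions concrete, here is a minimal sketch of how the masked inputs would differ. It assumes the structure-aware vocabulary pairs each amino-acid letter with a lowercase 3Di letter and that `#` stands for a masked half of the pair; check the SaProt tokenizer for the exact conventions:

```python
def make_sa_tokens(seq: str, struc: str) -> str:
    """Interleave amino-acid and 3Di letters into SA tokens, e.g. 'M'+'d' -> 'Md'."""
    assert len(seq) == len(struc)
    return "".join(aa + s for aa, s in zip(seq, struc.lower()))

def mask_structure(seq: str) -> str:
    """Sequence given, structure unknown (structure-prediction direction): 'M#E#...'."""
    return "".join(aa + "#" for aa in seq)

def mask_sequence(struc: str) -> str:
    """Structure given, sequence to design (sequence-design direction): '#d#v...'."""
    return "".join("#" + s for s in struc.lower())
```

As discussed above, only the second direction (recovering amino acids from a given structure backbone) matches what SaProt's pre-training loss actually supervised.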

> Would it be possible to do PEFT (Parameter-Efficient Fine-Tuning) as well when fine-tuning this model?

Yes. If you check our code at https://github.com/westlake-repl/SaProt/blob/main/model/saprot/base.py, you will find that we have already implemented the LoRA technique for model fine-tuning. You can enable it simply by setting `use_lora` to `True`.
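In config terms, that would look something like the following (a sketch only; the flag may live under a different section in the actual YAML files):

```yaml
model:
  use_lora: True   # train LoRA adapters instead of updating all backbone weights
```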

Hope this resolves your questions. Let me know if I can help you further :)

sj584 commented 1 day ago

Thanks again for your amazingly helpful and quick reply!