AlexanderKroll / ESP


Sequence length above maximum and Missing key(s) in state_dict #21

Closed · DominicDin closed this 2 months ago

DominicDin commented 7 months ago

Hi Alex & community, thank you for your great work!

When I run the training_ESM1b_taskspecific.py script, I get a "Sequence length above maximum" error. How should I modify the code to run it properly? The datasets, 'train_data_ESM_training.pkl' and 'validation_data_ESM_training.pkl', were downloaded directly from the repository; I then re-saved them with pickle protocol 4 so they could be loaded. [screenshot of the error]
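For reference, this is roughly how I converted the files (a minimal sketch; it assumes the original pickles can be loaded on the Python version used for the conversion):

```python
import pickle

import pandas as pd

# Re-save the repository pickles with pickle protocol 4 so that
# older Python versions can load them.
for name in ["train_data_ESM_training.pkl", "validation_data_ESM_training.pkl"]:
    data = pd.read_pickle(name)           # load with the original protocol
    with open(name, "wb") as f:
        pickle.dump(data, f, protocol=4)  # write back with protocol 4
```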

Additionally, I am trying to train an ESM-2 model with the same 1280 embedding dimensions, but I am encountering a "Missing key(s) in state_dict" error in FullModel, as shown below. Do you have any suggestions? Since I am loading the ESM model from a local file rather than downloading it, do I still need to use the alphabet to load the 'contact-regression.pt' file? [screenshot of the error]

Thanks for any hints!

Best regards,
Domi

AlexanderKroll commented 2 months ago

Hi Domi,

sorry for my very late response. You need to make sure that all sequences have a maximum of 1022 amino acids, because ESM-1b cannot process longer sequences. So if your input contains longer sequences, you need to truncate them to the first 1022 amino acids; see the sketch below.
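Something like this should work (a minimal sketch, assuming the data is a pandas DataFrame with a 'Sequence' column; adjust the column name to your files):

```python
import pandas as pd

MAX_LEN = 1022  # ESM-1b cannot process sequences longer than this

df = pd.read_pickle("train_data_ESM_training.pkl")
# Keep only the first 1022 amino acids of every sequence.
df["Sequence"] = df["Sequence"].str[:MAX_LEN]
```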

For the second issue, I am not sure what is going on without more context. But it looks as if you are trying to set the parameters of the ESM-2 model with the weights of my trained ESM-1b model, right? Since the two models have different architectures (and therefore different parameter names), this cannot work.
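If you want to fine-tune ESM-2, you should start from ESM-2's own checkpoint rather than from my ESM-1b weights. Here is a minimal sketch using the fair-esm package (the file paths are hypothetical; as far as I know, load_model_and_alphabet_local expects the matching '<checkpoint>-contact-regression.pt' to sit next to the weights file, so keep both in the same directory):

```python
import esm
import torch

# Load a local ESM-2 checkpoint together with its alphabet.
# The matching "esm2_t33_650M_UR50D-contact-regression.pt" should be
# in the same directory as the weights file (hypothetical local path).
model, alphabet = esm.pretrained.load_model_and_alphabet_local(
    "esm2_t33_650M_UR50D.pt"
)

# Loading ESM-1b weights into this ESM-2 module, in contrast, triggers
# "Missing key(s) in state_dict", because the parameter names differ:
# state = torch.load("my_finetuned_ESM1b.pt")  # hypothetical ESM-1b weights
# model.load_state_dict(state)                 # -> Missing/Unexpected key(s)
```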