agemagician closed this issue 3 years ago
Hi Ahmed! Just looking back at this issue: with the latest updates to the repo, I think we now have a good comparison to the available ProtTrans models, right? I'll close this for now; let us know when you release newer models!
Thanks a lot, @tomsercu. Yes, sure, we have three new models that we haven't released yet :) We will publish them early next year. I will ping you then.
Happy New Year, and congratulations again on your awesome work.
Sounds good, thanks - also happy holidays to you and your fantastic team!
Hi, I was wondering if it would be possible to include the performance of ProtT5-XL-UniRef50 in the model comparison, as it seems to perform well on all tasks tested by the authors of ProtTrans. I would be really interested in seeing how it stacks up against ESM-1b, which was trained on the same dataset.
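In case it helps anyone run an informal comparison in the meantime, here is a minimal sketch of extracting per-residue embeddings from both models so the same downstream probe can consume them. It assumes the `fair-esm` package and the `Rostlab/prot_t5_xl_uniref50` checkpoint on the Hugging Face hub; the sequence is just a placeholder.

```python
import torch
import esm
from transformers import T5Tokenizer, T5EncoderModel

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder sequence

# --- ESM-1b per-residue embeddings (fair-esm) ---
esm_model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
esm_model.eval()
batch_converter = alphabet.get_batch_converter()
_, _, tokens = batch_converter([("seq0", seq)])
with torch.no_grad():
    out = esm_model(tokens, repr_layers=[33])
# Drop the BOS/EOS positions to keep one vector per residue
esm_emb = out["representations"][33][0, 1 : len(seq) + 1]  # shape (L, 1280)

# --- ProtT5-XL-UniRef50 per-residue embeddings (transformers) ---
tokenizer = T5Tokenizer.from_pretrained(
    "Rostlab/prot_t5_xl_uniref50", do_lower_case=False
)
t5_model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_uniref50")
t5_model.eval()
# ProtT5 expects space-separated residues; rare residues (U, Z, O, B)
# should be mapped to X beforehand for real data.
ids = tokenizer(" ".join(seq), return_tensors="pt")
with torch.no_grad():
    # last_hidden_state includes a trailing EOS token; slice it off
    t5_emb = t5_model(**ids).last_hidden_state[0, : len(seq)]  # shape (L, 1024)

print(esm_emb.shape, t5_emb.shape)
```

Note the different embedding widths (1280 vs. 1024), so any shared probe needs a per-model input dimension.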
Hi,
Congratulations on your great work and for releasing the pretrained models.
In your work you compared your results against SeqVec, our earlier work, and I was wondering if you have plans to compare against our new work, ProtTrans? https://github.com/agemagician/ProtTrans
From a quick comparison, it seems the ProtBert-BFD model performs better than Transformer-34 on SS8 with only 63% of Transformer-34's capacity.
I believe more analysis is needed here, especially to see how RoBERTa-style models compare to other transformers, including XLNet, ALBERT, BERT, ELECTRA, Transformer-XL, etc.
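To make the kind of head-to-head evaluation discussed above concrete, here is a minimal sketch of a linear SS8 probe trained on frozen per-residue embeddings. This is not the evaluation code from either paper; `embeddings` and `labels` are hypothetical stand-ins for data you would produce with any of the models mentioned.

```python
import torch
import torch.nn as nn

NUM_SS8_CLASSES = 8  # 8-state secondary structure labels

# Hypothetical stand-ins: per-residue embeddings (num_residues, embed_dim)
# and integer SS8 labels (num_residues,) from a real annotated dataset.
embed_dim = 1280
embeddings = torch.randn(1000, embed_dim)
labels = torch.randint(0, NUM_SS8_CLASSES, (1000,))

# Linear probe on frozen embeddings: the language model is not updated,
# so accuracy reflects what the representations already encode.
probe = nn.Linear(embed_dim, NUM_SS8_CLASSES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(probe(embeddings), labels)
    loss.backward()
    opt.step()

acc = (probe(embeddings).argmax(dim=-1) == labels).float().mean()
print(f"train accuracy: {acc:.3f}")
```

Running the identical probe over embeddings from each pretrained model is one simple way to compare representation quality while controlling for the downstream head.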