ProtTrans provides state-of-the-art pretrained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using Transformer models.
Hi there!
Was ProtT5 trained to predict just the masked positions or the full sequence? When I use the `generate` function with masked sequences, I notice that the model returns the full sequence. Is this the default behavior for a model trained on an MLM task? Thank you very much!
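For reference, here is a minimal sketch of the kind of call I mean, assuming the public Rostlab/prot_t5_xl_uniref50 checkpoint with Hugging Face Transformers and marking the masked position with a T5 sentinel token (the checkpoint name and masking convention are just my assumptions, not necessarily how ProtT5 expects masking to be done):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumed checkpoint; swap in whichever ProtT5 model you are using.
tokenizer = T5Tokenizer.from_pretrained("Rostlab/prot_t5_xl_uniref50", do_lower_case=False)
model = T5ForConditionalGeneration.from_pretrained("Rostlab/prot_t5_xl_uniref50")

# ProtT5 expects space-separated amino acids; here one residue is replaced by
# the T5 sentinel token <extra_id_0> to mark the masked position (an assumption
# about the masking convention).
masked_sequence = "M K T <extra_id_0> L V L L A"
inputs = tokenizer(masked_sequence, return_tensors="pt")

# generate() decodes autoregressively from the encoder representation; whether
# it emits only the masked residue(s) or a whole sequence depends on the
# training objective, which is what this question is about.
output_ids = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```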