Closed: exs-fdreyer closed this issue 7 months ago
Hello,
In section 2.4, under "ProtT5", our paper states the following:
Contrary to the original T5 model which masks spans of multiple tokens, we adopted BERT’s denoising objective to corrupt and reconstruct single tokens using a masking probability of 15 percent.
So we followed BERT-style noising and denoising with a single sentinel. This means that if the original sequence is "E V Q L V E S G A E", then the input/target pair would look something like the sketch below.
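A rough illustration of what that corruption would look like; the masked positions and the sentinel spelling are assumptions here, not the exact pre-training setup:

```python
# Illustrative reconstruction of BERT-style corruption with a single sentinel.
# The masked positions and the sentinel spelling "<extra_id_0>" are assumptions.
original = "E V Q L V E S G A E"

# Roughly 15% of residues are corrupted; every corrupted residue is replaced by
# the same single sentinel token (no numbered T5-style span sentinels).
encoder_input = "E V Q <extra_id_0> V E S <extra_id_0> A E"

# The decoder is then asked to reconstruct the corrupted tokens, i.e. the label
# would be the full, unmasked sequence rather than a T5 span-style label.
decoder_label = "E V Q L V E S G A E"
```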
Hello,
I have been trying to understand the ProtT5 model and how to compute a loss for the full encoder-decoder. Looking through GitHub issues on this repository, it is suggested in several places that the format for predicting masked residues should be, e.g. for a poly-alanine sequence "AAAAA":
input: "A A <extra_id_0> A <extra_id_1>"
label: "<extra_id_0> A <extra_id_1> A"
which is similar to how HuggingFace describes T5 training: https://huggingface.co/docs/transformers/model_doc/t5#training
However, trying this results in a substantially worse loss than simply using the original unmasked sequence as the label. For example, running the comparison sketched below:
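A minimal sketch of that comparison, assuming the public Rostlab/prot_t5_xl_uniref50 checkpoint from HuggingFace; the sequence, masking positions, and sentinel spelling are illustrative rather than the exact script:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Checkpoint name is assumed; any ProtT5 encoder-decoder checkpoint should behave similarly.
model_name = "Rostlab/prot_t5_xl_uniref50"
tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

# Encoder input: poly-alanine with two residues replaced by sentinel tokens
# (sentinel spelling assumed to be in the ProtT5 vocabulary).
masked_input = "A A <extra_id_0> A <extra_id_1>"

# Case 1: the full, unmasked sequence as the decoder label.
full_label = "A A A A A"
# Case 2: T5-style span labels containing only the masked residues.
span_label = "<extra_id_0> A <extra_id_1> A"

input_ids = tokenizer(masked_input, return_tensors="pt").input_ids
with torch.no_grad():
    for name, text in [("full-sequence label", full_label), ("span-style label", span_label)]:
        labels = tokenizer(text, return_tensors="pt").input_ids
        loss = model(input_ids=input_ids, labels=labels).loss
        print(f"{name}: NLL = {loss.item():.2f}")
```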
The first case (full sequence as label) gives a negative log likelihood loss of 1.2 and the second (span-style label) gives 40, with the first decreasing as expected as the number of masked residues is reduced, while the second stays roughly constant. This makes me think that the correct way to further pre-train the model would be to pass the full unmasked sequence as the label rather than the masked tokens. Is that correct?