Closed · phbradley closed this 9 months ago
Hey there,

Thanks for making this nice code available! I've just started poking around, and I know next to nothing about AI/transformers, but I'm wondering about this line:

https://github.com/GfellerLab/MixTCRpred/blob/main/src/models.py#L40

self.embedding_pos_TRB = PositionWiseEmbedding(self.vocab_size, self.embedding_dim, self.padding[1] + 2*self.padding[-1])

and whether it should be `self.padding[2]` rather than `self.padding[1]`. Of course, they are the same right now (20), so it doesn't matter! But maybe there could be unexpected behavior if one wanted to make the cdr3a padding smaller (or the cdr3b padding larger) at some future time? Also, I'm just trying to see if I understand what's going on.

Thanks, Phil
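To make the concern concrete, here is a minimal sketch, not the real MixTCRpred code: the `PositionWiseEmbedding` body, the layout of `padding`, and all of the numbers below are assumptions chosen only to mirror the constructor call quoted above. It shows that sizing the TRB positional table with the alpha CDR3 padding (`padding[1]`) only works while the two paddings happen to be equal:

```python
import torch
import torch.nn as nn


class PositionWiseEmbedding(nn.Module):
    """Hypothetical stand-in: token embedding plus a learned per-position embedding."""

    def __init__(self, vocab_size, embedding_dim, max_len):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, embedding_dim)
        self.pos = nn.Embedding(max_len, embedding_dim)  # one row per sequence position

    def forward(self, x):  # x: (batch, seq_len) of token indices
        positions = torch.arange(x.size(1), device=x.device)
        return self.tok(x) + self.pos(positions)


vocab_size, embedding_dim = 25, 32  # assumed values, not MixTCRpred's
# Assumed layout: padding[1] = alpha CDR3 padding, padding[2] = beta CDR3 padding;
# padding[-1] stands in for the extra term in the real constructor call.
padding = [10, 20, 25, 4]  # beta padding raised to 25 so the two indices differ

# Sizing as on the quoted line: the TRB table gets the *alpha* padding,
# i.e. 20 + 2*4 = 28 rows.
emb_TRB = PositionWiseEmbedding(vocab_size, embedding_dim,
                                padding[1] + 2 * padding[-1])

# A beta-sized input is 25 + 2*4 = 33 tokens long, so positions 28..32
# fall off the end of the positional table.
x_beta = torch.randint(0, vocab_size, (1, padding[2] + 2 * padding[-1]))
try:
    emb_TRB(x_beta)
except IndexError as err:
    print("fails once the paddings differ:", err)
```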
Hi Phil,

Thanks for your interest in our work. You're correct; it should be `self.padding[2]` (the padding for the beta CDR3 chain). As you noticed, in the GitHub version (for making predictions with the pre-trained MixTCRpred models) we set the CDR3 beta padding equal to the CDR3 alpha padding, both 20 (line 147 in MixTCRpred.py), so the two indices currently give the same result. Thank you very much for pointing that out!

Best, Giancarlo
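For completeness, and continuing the same hypothetical sketch from above (same class, same assumed `padding` list and input), swapping the index to 2 makes the asymmetric case go through:

```python
# Continuing the sketch above: index 2 (the beta CDR3 padding) now sizes
# the TRB positional table, so a beta-sized input fits even when the
# alpha and beta paddings differ.
emb_TRB_fixed = PositionWiseEmbedding(vocab_size, embedding_dim,
                                      padding[2] + 2 * padding[-1])  # 33 rows
out = emb_TRB_fixed(x_beta)  # positions 0..32 all have an embedding row
print(out.shape)             # torch.Size([1, 33, 32])
```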
Great, thanks Giancarlo for the speedy reply!