As discussed in #7, I added an additional column `context length` to the table and filled in the data for most of the models.
While the value was unambiguous for most of the recent decoder transformers with a standard positional embedding, here are some exceptions:
- `T5` was trained with a sequence length of 512. However, its use of relative attention theoretically allows for longer sequences; see the discussion here. As 512 was mainly used during training, I decided to use `512`, since this is the value that effectively makes sense given the capabilities obtained during training.
- `Replit Code` uses ALiBi, which allows extrapolation at inference time to sequences longer than the ones seen during training (see the sketch after this list). As I did not find any other information on the maximum context length, I decided to use `infinity` as the value. Not 100% sure though.
- `MPT-7B` also uses ALiBi. According to their blog post and their GitHub, the models are trained on inputs of up to 65k tokens and can handle up to 84k. Therefore, I decided to use `84k`.
- `SantaCoder` + `RedPajama-INCITE`: could not find any context length information. Marked them with `?`.
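
Since both `Replit Code` and `MPT-7B` rely on ALiBi for extrapolation, here is a minimal sketch of the bias it adds to the attention scores, just to illustrate why the trained sequence length is not a hard limit. This is the generic formulation from the ALiBi paper, not the exact Replit/MPT implementation, and it assumes a power-of-two number of heads:

```python
# Minimal ALiBi sketch (Press et al.), assuming a power-of-two number of heads.
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Head-specific slopes: a geometric sequence 2^(-8/n), 2^(-16/n), ...
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    # Relative distance j - i between key position j and query position i.
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)  # future positions (j > i) are handled by the causal mask
    # Linear penalty on the raw attention scores: keys further in the past get a larger negative bias.
    return slopes[:, None, None] * distance[None, :, :]    # shape (n_heads, seq_len, seq_len)

# Because the bias depends only on relative distance, it is defined for any sequence
# length, so the practical limit is memory/quality rather than a positional embedding table.
bias = alibi_bias(n_heads=8, seq_len=16)
```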
TODO: Some entries are missing and some are still unclear to me; these are marked with `?`.
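
For the models where the value was unambiguous, I simply read it off the published Hugging Face config; that would also be the first thing to check for the remaining `?` entries. A rough sketch, assuming the `transformers` library and using GPT-NeoX / T5 only as examples (the field name differs by architecture, e.g. `n_positions` for GPT-2-style configs):

```python
from transformers import AutoConfig, AutoTokenizer

# Standard positional embeddings: the context length is stored in the config.
cfg = AutoConfig.from_pretrained("EleutherAI/gpt-neox-20b")
print(cfg.max_position_embeddings)  # 2048

# T5 has no such field (relative attention); the tokenizer's default
# model_max_length reflects the 512 used during training.
print(AutoTokenizer.from_pretrained("t5-base").model_max_length)  # 512
```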