By default, the tokenizer adds special tokens to the `input_ids`, specifically [CLS] at the beginning and [SEP] at the end of each tokenized sequence. Was DNABERT-2 trained with these tokens present? If so, has the [CLS] token's embedding been used for fine-tuning, as an alternative to mean pooling?
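For concreteness, here is roughly what I mean — a minimal sketch assuming the `zhihan1996/DNABERT-2-117M` checkpoint on the Hugging Face Hub (its custom code needs `trust_remote_code=True`; the mean-pooling variant follows the README):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint name; the hosted code requires trust_remote_code=True.
model_name = "zhihan1996/DNABERT-2-117M"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

dna = "ACGTAGCATCGGATCTATCTATCGACACTTGGTTATCGATCTACGAGCATCTCGTTAGC"
inputs = tokenizer(dna, return_tensors="pt")

# Inspect the special tokens the tokenizer inserted:
# expect [CLS] ... [SEP] wrapping the BPE tokens.
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

with torch.no_grad():
    hidden_states = model(inputs["input_ids"])[0]  # [1, seq_len, 768]

# Option A: take the [CLS] token's final hidden state as the sequence embedding.
embedding_cls = hidden_states[0, 0]            # [768]

# Option B: mean-pool over all token positions, as in the README example.
embedding_mean = hidden_states[0].mean(dim=0)  # [768]
```

I'm asking whether Option A is meaningful for this model, or whether Option B is the intended usage.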
Thanks for the model!