Hello!
Since FinBERT is further trained from BERT, and BERT ships with its own tokenizer, I assume you are reusing that tokenizer rather than training your own, right? Otherwise the model might get confused, since its embeddings were learned against the original tokenizer's vocabulary.
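A minimal sketch of what that looks like in practice, assuming the `transformers` library and the public `ProsusAI/finbert` checkpoint (substitute whichever checkpoint you are actually fine-tuning):

```python
from transformers import AutoTokenizer

# Load the tokenizer that ships with the checkpoint instead of training a new one,
# so token ids line up with the vocabulary the model's embeddings were trained on.
# "ProsusAI/finbert" is one public FinBERT checkpoint on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")

# Tokenize as usual; these ids are consistent with the pretrained model.
encoding = tokenizer("Quarterly revenue rose 12%.")
print(encoding["input_ids"])
```

If you were to train a fresh tokenizer instead, the same token string could map to a different id, and the pretrained embedding rows would no longer correspond to the inputs the model sees.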