yya518 / FinBERT

A Pretrained BERT Model for Financial Communications. https://arxiv.org/abs/2006.08097
Apache License 2.0

What is the max_seq_len on which this model (FinBert_FinVocab_Uncased) has been trained? #7

Closed. man0007 closed this issue 4 years ago.

yya518 commented 4 years ago

Following the original BERT training procedure, we set a maximum sequence length of 128 tokens and train the model until the training loss starts to converge. We then continue training the model, allowing sequence lengths of up to 512 tokens.
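
In practice this means 512 is the upper bound for `max_seq_len` when tokenizing inputs for this model. Below is a minimal sketch using the Hugging Face `transformers` library; the checkpoint path is hypothetical, so substitute the actual FinBert_FinVocab_Uncased weights (or a converted copy of them):

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint path -- replace with the actual
# FinBert_FinVocab_Uncased weights or a converted checkpoint.
checkpoint = "path/to/FinBert_FinVocab_Uncased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

text = "The company reported a 12% increase in quarterly revenue."

# The model was pretrained with sequences of up to 512 tokens,
# so 512 is the hard upper bound for max_length here.
inputs = tokenizer(
    text,
    max_length=512,
    truncation=True,
    padding="max_length",
    return_tensors="pt",
)

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 512, hidden_size)
```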

man0007 commented 4 years ago

Which one would give better accuracy: a max_seq_len of 256 or 512?