zhuango closed this issue 3 years ago.
Which BERT pre-trained model do you use? Did you download the model following the guide?
wget https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip -O uncased_L-12_H-768_A-12.zip
@byshiue Thanks, I will try this pre-trained model.
I replaced the pre-trained model and achieved the following F1 scores: 89.38% (FP32, seq_length=384), 87.90% (PTQ, ft2, seq_length=384), 88.69% (PTQ, ft1, seq_length=384), which looks good.
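For context on how these numbers are computed: SQuAD v1.1 reports a token-level F1 between the predicted answer span and the reference answer. A simplified sketch (omitting the official script's punctuation/article normalization) looks like this:

```python
from collections import Counter

def squad_f1(prediction: str, ground_truth: str) -> float:
    """Simplified token-level F1 in the style of the SQuAD v1.1 eval script.

    Note: the official evaluator also lowercases, strips punctuation,
    and removes articles; this sketch only lowercases and splits.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    # Multiset intersection counts tokens shared by prediction and reference.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

The reported percentages are this score averaged over all questions in the dev set (times 100).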
I wonder what the difference is between the model you provide and the model from the Google BERT repo?
We also encountered the same issue, but we don't have any further insight.
@byshiue Thanks very much~
I recently ran post-training quantization (PTQ) on a BERT Base model for the SQuAD v1.1 task and failed to achieve the F1 score released on this page.
My training script:
And my script for post-training quantization:
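As background on what PTQ does to the weights and activations (this is an illustrative sketch, not the actual quantization script referenced above): symmetric per-tensor int8 PTQ picks a scale from the tensor's dynamic range, rounds to 8-bit integers, and dequantizes back, introducing the rounding error that can cost F1 points.

```python
import numpy as np

def quantize_dequantize_int8(x: np.ndarray) -> np.ndarray:
    """Simulate symmetric per-tensor int8 post-training quantization.

    Illustrative only: real PTQ pipelines calibrate the scale on
    representative data rather than using the raw max, which is one
    reason calibration choices can swing the final F1 score.
    """
    # Map the largest absolute value onto the int8 limit 127.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    # Dequantize: the difference from x is the quantization error.
    return q.astype(np.float32) * scale
```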
PTQ F1-score I got: 78.08% (FP32: 88.65%, seq_length=384)
Released F1-score: 88.30% (FP32: 89.57%, seq_length=384)
Could you guys help? Thanks~