facebookresearch / MLQA


Multi-lingual BERT Result of Chinese XLT task #13

Closed ztl-35 closed 4 years ago

ztl-35 commented 4 years ago

Hi, I used multilingual BERT (pre-trained weights downloaded from Google's official GitHub) as my pretrained model, following Table 5 of your paper. Your XLT result for EN-ZH is F1 = 57.5 / EM = 37.3, but my F1 is only about 20%. That is a large gap, and I don't think hyperparameters alone can explain it. Could you please release your source code in this repository? Thanks!
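For context, here is roughly how I understand the MLQA F1 for Chinese answers to be computed — a minimal sketch assuming character-level segmentation of CJK text, similar in spirit to the repo's `mlqa_evaluation_v1.py` but my own approximation, not the official code. If predictions are instead scored with the plain SQuAD script's whitespace `split()`, Chinese answers collapse to single tokens and F1 is badly deflated, which could explain part of a gap like mine:

```python
import re
from collections import Counter

def mixed_segmentation(text):
    # Split CJK characters individually while keeping Latin/number
    # runs whole (an approximation of MLQA's zh answer segmentation).
    tokens = []
    for tok in text.split():
        tokens.extend(re.findall(r'[\u4e00-\u9fff]|[^\u4e00-\u9fff]+', tok))
    return tokens

def f1_score(prediction, gold):
    pred_toks, gold_toks = mixed_segmentation(prediction), mixed_segmentation(gold)
    common = Counter(pred_toks) & Counter(gold_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# Character segmentation gives partial credit; whitespace split would give 0.0.
print(f1_score("北京市", "北京"))  # 0.8
```

Is this the segmentation behavior your reported EN-ZH numbers assume, or does the official scorer differ in some way I'm missing?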