CAMeL-Lab / CAMeLBERT

Code and models for "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models". EACL 2021, WANLP.
https://aclanthology.org/2021.wanlp-1.10
MIT License

Question regarding pretraining task #4

Closed · ghaddarAbs closed this issue 2 years ago

ghaddarAbs commented 2 years ago

Hi,

I just wanted to ask whether you pretrained your model on the next sentence prediction (NSP) task?

Thanks

balhafni commented 2 years ago

We followed the exact pretraining objectives introduced in the BERT paper, i.e., we pretrained with both masked language modeling (MLM) and next sentence prediction (NSP).
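
For context, these are BERT's two standard pretraining heads. Below is a minimal sketch (not part of the original thread) of how one could inspect both heads with Hugging Face `transformers`, assuming the `CAMeL-Lab/bert-base-arabic-camelbert-mix` checkpoint from the model hub as an example; if a released checkpoint does not include the pretraining head weights, `transformers` will warn that those layers are newly initialized.

```python
# Minimal sketch: inspect BERT's two pretraining heads (MLM + NSP).
# Assumes the CAMeL-Lab/bert-base-arabic-camelbert-mix checkpoint is available.
import torch
from transformers import AutoTokenizer, BertForPreTraining

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-mix"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForPreTraining.from_pretrained(model_name)

# Encode a sentence pair; the NSP head scores whether sentence B follows A.
inputs = tokenizer("This is sentence A.", "This is sentence B.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# prediction_logits        -> MLM head (per-token scores over the vocabulary)
# seq_relationship_logits  -> NSP head (2-way: "is next" vs. "not next")
print(outputs.prediction_logits.shape)        # (1, seq_len, vocab_size)
print(outputs.seq_relationship_logits.shape)  # (1, 2)
```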

ghaddarAbs commented 2 years ago

Thanks, that answers my question. I will close the issue.