etetteh closed this issue 3 years ago
I keep getting these errors, too.
I have tried installing and using other Python versions, and I also changed the TensorFlow version to 1.15.dev20190909. None of the above solved the problem.
Waiting for a possible solution.
I have tried every solution possible, and none is working. I'm just wondering how they trained their model, as I really need this for a project with a hard deadline.
Hi, @etetteh
I think I figured out the problem: the `max_seq_length` in the pretraining configuration must not exceed the `max_seq_length` used when building the tfrecords.

I built my tfrecords with `max_seq_length = 128` (the default), so I cannot pretrain with `max_seq_length = 256` or `512`. I set `max_seq_length = 128` and trained a small model, and everything went smoothly!
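The mismatch described above can be caught before launching a long pretraining run with a simple consistency check. A minimal sketch, assuming you track the sequence length used at tfrecord-build time yourself (the helper below is hypothetical, not part of the ELECTRA codebase):

```python
def check_seq_length(tfrecord_seq_len: int, pretrain_seq_len: int) -> None:
    """Fail fast if the pretraining config asks for longer sequences
    than the tfrecords were serialized with."""
    if pretrain_seq_len > tfrecord_seq_len:
        raise ValueError(
            f"max_seq_length={pretrain_seq_len} in the pretraining config "
            f"exceeds max_seq_length={tfrecord_seq_len} used when building "
            "the tfrecords; rebuild the data or lower the config value."
        )

# tfrecords built with the default max_seq_length of 128:
check_seq_length(tfrecord_seq_len=128, pretrain_seq_len=128)  # passes
```

Calling it with `pretrain_seq_len=256` against tfrecords built at 128 raises immediately, instead of failing partway through pretraining.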
Regards,
Great. I was about to comment that I fixed mine too. I had to change the same thing, plus sort out some environment issues.
@briverse17 Pretraining was successful, but I am getting this error during finetuning. Did you have a similar problem?
I am trying to pretrain my ELECTRA base model, but I keep getting this output: