Hi, did you replace --bert_config_file=${DATA_DIR}/uncased_L-24_H-1024_A-16/bert_config.json accordingly?
The correct JSON config file can be found here: https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
(Sorry, I will add this file to BERT_Base_trained_on_MSMARCO.zip.)
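In case it helps, here is a quick sanity check that the config file being passed really is the Base one (a minimal sketch; the path is a placeholder, and the field names follow the standard bert_config.json format):

```python
import json

# Placeholder path: point this at the file you pass via --bert_config_file
config_path = "uncased_L-12_H-768_A-12/bert_config.json"

with open(config_path) as f:
    config = json.load(f)

# BERT-Base uses hidden_size 768 with 12 layers; BERT-Large uses 1024 with 24.
print("hidden_size:", config["hidden_size"])
print("num_hidden_layers:", config["num_hidden_layers"])
assert config["hidden_size"] == 768, "This looks like a Large config, not Base."
```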
I added bert_config.json to BERT_Base_trained_on_MSMARCO.zip. You should not see the error anymore if you pass it to --bert_config_file.
That was the problem. Thanks for fixing it.
Great, thanks
Could you please add the config and vocab files to the BERT_Large_MSMARCO and BERT_Large_trained_on_TREC_CAR models?
I am trying to run the "run_msmarco.py" script in eval mode (on my server, so not in Colab). I have replaced the uncased_L-24_H-1024_A-16 model with your BERT_Base_trained_on_MSMARCO model on the MSMARCO_tfrecord.tar.gz data; in other words, I'm trying to reproduce your results. However, I get the following error:
ValueError: Shape of variable bert/embeddings/LayerNorm/beta:0 ((1024,)) doesn't match with shape of tensor bert/embeddings/LayerNorm/beta ([768]) from checkpoint reader.
at line 393 (for item in result:). It seems like the expected dimension doesn't match the one received. Do you have any suggestions about what might be the problem and how it can be fixed?
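A sketch of how the shapes stored in the checkpoint can be inspected to pin this down (it assumes TensorFlow 1.x, as used by the BERT code; the checkpoint prefix below is a placeholder for the extracted files):

```python
import tensorflow as tf  # assumes TF 1.x, as used by the original BERT code

# Placeholder prefix: the extracted BERT_Base_trained_on_MSMARCO checkpoint
checkpoint_path = "BERT_Base_trained_on_MSMARCO/model.ckpt"

# List the LayerNorm variables stored in the checkpoint with their shapes.
for name, shape in tf.train.list_variables(checkpoint_path):
    if "embeddings/LayerNorm" in name:
        print(name, shape)

# A Base checkpoint prints shapes ending in 768. If the config passed via
# --bert_config_file declares hidden_size 1024 (the Large config), the graph
# is built with 1024-dim variables and restoring fails with the ValueError above.
```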