mandarjoshi90 / coref: BERT for Coreference Resolution

I think there is a typo in experiments.conf #21

Closed: fairy-of-9 closed this issue 4 years ago

fairy-of-9 commented 4 years ago

This is the original code:

bert_base = ${best}{
  num_docs = 2802
  bert_learning_rate = 1e-05
  task_learning_rate = 0.0002
  max_segment_len = 128
  ffnn_size = 3000
  train_path = ${data_dir}/train.english.128.jsonlines
  eval_path = ${data_dir}/dev.english.128.jsonlines
  conll_eval_path = ${data_dir}/dev.english.v4_gold_conll
  max_training_sentences = 11
  bert_config_file = ${best.log_root}/bert_base/bert_config.json
  vocab_file = ${best.log_root}/bert_base/vocab.txt
  tf_checkpoint = ${best.log_root}/bert_base/model.max.ckpt
  init_checkpoint = ${best.log_root}/bert_base/model.max.ckpt
}

train_bert_base = ${bert_base}{
  tf_checkpoint = ${best.log_root}/cased_L-12_H-768_A-12/bert_model.ckpt
  init_checkpoint = ${best.log_root}/cased_L-12_H-768_A-12/bert_model.ckpt
}

I think the following is correct if I'm training for the first time:

train_bert_base = ${bert_base}{
  bert_config_file = ${best.log_root}/cased_L-12_H-768_A-12/bert_config.json
  vocab_file = ${best.log_root}/cased_L-12_H-768_A-12/vocab.txt
  tf_checkpoint = ${best.log_root}/cased_L-12_H-768_A-12/bert_model.ckpt
  init_checkpoint = ${best.log_root}/cased_L-12_H-768_A-12/bert_model.ckpt
}
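
For context, experiments.conf is written in HOCON, where train_bert_base = ${bert_base}{...} inherits every key from bert_base and then applies only the overrides listed in the braces, so any key not repeated (here bert_config_file and vocab_file) keeps its inherited value. Below is a minimal sketch of that merge behavior using pyhocon (the parser this repo's config goes through); the keys and paths are made up for illustration:

from pyhocon import ConfigFactory

# Toy config mirroring the ${bert_base}{...} override pattern from experiments.conf.
conf = ConfigFactory.parse_string("""
bert_base = { vocab_file = old/vocab.txt, tf_checkpoint = old/model.max.ckpt }
train_bert_base = ${bert_base}{ tf_checkpoint = new/bert_model.ckpt }
""")

print(conf.get("train_bert_base.vocab_file"))     # old/vocab.txt (inherited)
print(conf.get("train_bert_base.tf_checkpoint"))  # new/bert_model.ckpt (overridden)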
mandarjoshi90 commented 4 years ago

It should not matter since the vocab and config are the same. Feel free to send a PR though :)
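
A quick way to confirm that locally, assuming both directories sit side by side under your log_root (the paths and LOG_ROOT variable below are illustrative; adjust to your setup):

import filecmp
import os

log_root = os.environ.get("LOG_ROOT", "./logs")  # hypothetical default location
for name in ("vocab.txt", "bert_config.json"):
    a = os.path.join(log_root, "bert_base", name)
    b = os.path.join(log_root, "cased_L-12_H-768_A-12", name)
    # shallow=False compares file contents byte by byte, not just stat info
    print(name, "identical" if filecmp.cmp(a, b, shallow=False) else "DIFFERS")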

fairy-of-9 commented 4 years ago

Thank you