awasthiabhijeet / PIE

Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models for paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
MIT License

Running pretrained PIE model gets stuck at INFO:tensorflow:Done running local_init_op. #30

Closed. eedenong closed this issue 2 years ago.

eedenong commented 2 years ago

Hi there, I am trying to run the pretrained PIE model as per the instructions in the repo. However, I get stuck at the point where the output says "INFO:tensorflow:Done running local_init_op." after just the first call of the pie_infer.sh file. I have tried solutions such as commenting out the d.repeat() call in word_edit_model.py, but I am still encountering the problem. I would really appreciate some help with this issue, thank you!

EDIT: I have managed to get past that line, but it takes quite a long time (about 5 minutes). So far it takes about 3 minutes per 10 iterations when enumerating wem_utils.timer(result). Is there any way to speed this up? Thank you!
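(Editorial note for context: in TensorFlow 1.x, tf.estimator.Estimator.predict returns a lazy generator, so a long pause right after "INFO:tensorflow:Done running local_init_op." usually reflects the checkpoint restore plus computation of the first batch rather than a hang. The toy sketch below illustrates this behaviour with a made-up model_fn and input_fn; none of these names are taken from PIE's word_edit_model.py.)

```python
import time

import numpy as np
import tensorflow as tf  # TF 1.x API, as used by PIE


def model_fn(features, labels, mode):
    # Trivial stand-in model; PIE's real model lives in word_edit_model.py.
    preds = tf.layers.dense(features["x"], 1)
    return tf.estimator.EstimatorSpec(mode, predictions={"y": preds})


def predict_input_fn():
    x = np.random.rand(32, 4).astype(np.float32)
    d = tf.data.Dataset.from_tensor_slices({"x": x})
    # No d.repeat() here: with repeat() the dataset never ends and the
    # generator returned by predict() would appear to run forever.
    return d.batch(8)


estimator = tf.estimator.Estimator(model_fn=model_fn)
result = estimator.predict(input_fn=predict_input_fn)  # lazy: nothing runs yet

start = time.time()
for i, pred in enumerate(result):
    # The first iteration pays for graph construction, variable restore
    # (this is where "Done running local_init_op" is logged) and the first
    # batch; later iterations only pay for their own batch.
    print("prediction %d ready after %.2fs" % (i, time.time() - start))
```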

awasthiabhijeet commented 2 years ago

Hi, d.repeat() is not really used during inference. I'm assuming you are currently only trying to do inference.

Could you please confirm whether you are following the instructions listed under "Inference using the pretrained PIE ckpt" in the README.md?
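(Editorial note: BERT-derived Estimator code typically applies d.repeat() and d.shuffle() only when an is_training flag is set, which is why commenting out d.repeat() should not change inference behaviour. Below is a minimal sketch of that common pattern; the builder name, its arguments, and the toy dataset are illustrative and not copied from word_edit_model.py.)

```python
import tensorflow as tf  # TF 1.x API


def input_fn_builder(make_dataset, batch_size, is_training):
    """Returns an Estimator input_fn in the usual BERT style.

    `make_dataset` is a placeholder for whatever builds the tf.data.Dataset
    of tokenized examples inside PIE.
    """
    def input_fn(params):
        d = make_dataset()
        if is_training:
            # Only the training pipeline repeats and shuffles indefinitely.
            # The predict pipeline makes a single pass over the data, so
            # the generator returned by estimator.predict() terminates.
            d = d.repeat()
            d = d.shuffle(buffer_size=1000)
        return d.batch(batch_size)
    return input_fn


# Toy usage: a dataset of 16 dummy examples, batched for inference.
toy_dataset = lambda: tf.data.Dataset.from_tensor_slices(
    {"input_ids": tf.zeros([16, 8], tf.int32)})
predict_input_fn = input_fn_builder(toy_dataset, batch_size=4, is_training=False)
```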

awasthiabhijeet commented 2 years ago

Relevant issues: