awasthiabhijeet / PIE

Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models for paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
MIT License

Time taken for multi_round_infer.sh to run? #15

Open pidugusundeep opened 4 years ago

pidugusundeep commented 4 years ago

I am running the scripts on Gitpod and multi_round_infer.sh is taking a long time. How long does it usually take?

alexrus commented 4 years ago

How big is your test dataset?
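As the question implies, wall-clock time scales with the dataset size, and multi-round inference multiplies that by the number of refinement rounds. A back-of-the-envelope estimate can tell you whether a long run is expected; this is a minimal sketch where the round count and throughput are assumed illustration values, not measurements from the paper or this repo:

```python
def estimate_runtime_seconds(n_sentences, rounds=4, sentences_per_second=50.0):
    """Rough wall-clock estimate for multi-round inference.

    rounds: number of iterative refinement passes (assumed value).
    sentences_per_second: single-round throughput on your hardware
    (assumed value; time a small sample to calibrate it).
    """
    return n_sentences * rounds / sentences_per_second

# e.g. 10k sentences at an assumed 50 sent/s over 4 rounds:
print(estimate_runtime_seconds(10_000))  # -> 800.0 seconds
```

Calibrating `sentences_per_second` on a few hundred sentences first makes the estimate for the full dataset much more reliable.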

melisa-writer commented 4 years ago

Hi, I am facing the same problem. Here are my (unsuccessful) attempts:

```
Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib,
accessing (e.g.) tf.contrib.resampler should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
INFO:tensorflow:prediction_loop marked as finished
WARNING:tensorflow:Reraising captured error
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
    self._extend_graph()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
```


Would you mind dropping some hints on how to achieve the decoding speed reported in the paper?

alexrus commented 4 years ago

@melisa-qordoba please give more details on how you exported and then imported the estimator.

What was the batch size when you used the exported estimator, still 1? If so, try a larger batch size and see if there is any improvement.
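The reason a larger batch helps is that per-call overhead (feeding inputs, dispatching the graph) is paid once per batch rather than once per sentence. Here is a toy sketch of that amortization, deliberately independent of TensorFlow; the overhead and per-item costs are assumed numbers for illustration only:

```python
import math

def total_cost(n_items, batch_size, per_call_overhead=0.5, per_item_cost=0.01):
    """Total cost = fixed overhead paid once per batch + per-item work."""
    n_batches = math.ceil(n_items / batch_size)
    return n_batches * per_call_overhead + n_items * per_item_cost

# Larger batches amortize the fixed per-call overhead:
print(total_cost(1000, batch_size=1))   # -> 510.0 (1000 calls, mostly overhead)
print(total_cost(1000, batch_size=64))  # -> 18.0  (16 calls)
```

With batch size 1 the overhead term dominates, which is consistent with seeing throughput far below the paper's reported numbers when decoding one sentence at a time.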

As the paper noted, this was made for accuracy rather than speed.