Closed: Rajmehta123 closed this issue 3 years ago
Hi!
On this page you can find evaluation results for the model trained on the QQP dataset. The model is not perfect (like any ML model) and reaches about 87% accuracy on the test set.
https://deeppavlov.readthedocs.io/en/master/features/overview.html#ranking-model-docs
But should it give different results on the same pair of sentences? For the example on this page, https://deeppavlov.readthedocs.io/en/master/features/models/neural_ranking.html, the same sentence pair is reported as 'This is a paraphrase'.
Hi! Thank you for pointing us to this problem. We will fix it in a future release.
The model weights were corrupted. Please remove the folder {MODELS_PATH} containing the old model; typically it is the ~/.deeppavlov/models/ folder. After that, download the model with the new weights using the line para_model = build_model(configs.ranking.paraphrase_ident_qqp_interact, download=True). Everything should then work. I am closing this issue.
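The cleanup step above can be sketched as follows. This is a minimal sketch, assuming the default cache location ~/.deeppavlov/models/ (the {MODELS_PATH} from the config); the rebuild call is shown in a comment because it downloads large pretrained weights:

```python
from pathlib import Path
import shutil

# Default DeepPavlov models cache; adjust this path if you configured
# a custom MODELS_PATH in your DeepPavlov settings.
models_path = Path.home() / ".deeppavlov" / "models"

def purge_model_cache(path: Path) -> bool:
    """Remove the cached (possibly corrupted) model weights so the next
    build_model(..., download=True) fetches fresh ones.

    Returns True if a directory was removed, False if nothing was there.
    """
    if path.is_dir():
        shutil.rmtree(path)
        return True
    return False

# After purging, rebuild the model with fresh weights:
# from deeppavlov import build_model, configs
# para_model = build_model(configs.ranking.paraphrase_ident_qqp_interact, download=True)
```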
Apologies for the image magnification; I couldn't scale down the screenshot.
DeepPavlov version (you can look it up by running pip show deeppavlov): 0.12.1
Python version: 3.7.0
Operating system (ubuntu linux, windows, ...): Linux
Issue: I downloaded the pretrained model for the Quora question pairs example from https://deeppavlov.readthedocs.io/en/master/features/models/neural_ranking.html. Running that example yields the wrong result: the pair should have been identified as a paraphrase.
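A minimal sketch of the reproduction, assuming the '&'-separated input format shown in the neural_ranking docs page; the build_model call is commented out here because it downloads large pretrained weights:

```python
def make_pair(s1: str, s2: str) -> str:
    """Join two phrases into the single '&'-separated string that the
    paraphrase_ident_qqp_interact config expects as one input item."""
    return f"{s1}&{s2}"

# Example sentence pair from the docs page (assumed wording):
pair = make_pair("How can I be a good geologist?",
                 "What should I do to be a great geologist?")

# Running the model itself (downloads the pretrained weights on first use):
# from deeppavlov import build_model, configs
# para_model = build_model(configs.ranking.paraphrase_ident_qqp_interact, download=True)
# print(para_model([pair]))
```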
This is the output on my local machine.
Error (including full traceback): none; the result is simply incorrect.