project-anuvaad / OpenNMT-py

MIT License

Prediction probability word wise #2

Closed aj7tesh closed 5 years ago

aj7tesh commented 5 years ago

Looking into how we can get the word-wise probability of the translated text, so as to handle poor word predictions.

aj7tesh commented 5 years ago

It has been found that the model does not give a word-wise score, as each next word is predicted based on the previous words that have already been predicted. Summing over these steps, OpenNMT outputs a single score for the whole translated sentence. However, this overall score is not sufficient on its own for judging translation quality. For example: But after all, the Chief Justice is a man with all his failings, all the sentiments and all the prejudices which we as common people have and I think to allow the Chief Justice practically a veto upon the appointment of Judges is really to transfer the authority to the Chief Justice which we are not prepared to vest in the President or the Government of the day. लेकिन कुल मिलाकर, मुख्य न्यायाधीश एक व्यक्ति है जिसकी सभी विफलताओं, सभी भावों और सभी पूर्वाग्रहों को हम समान जन के रूप में देखते हैं और मैं सोचता हूं कि न्यायाधीशों की नियुक्ति पर मुख्य न्यायाधीश वास्तव में उस प्राधिकारी को मुख्य न्यायाधीश को हस्तांतरित कर देता है जिसे हम उस दिन के राष्ट्रपति या सरकार में निहित करने के लिए तैयार नहीं हैं। score: -45.57688522338867

Truth be told, there is no one better at capturing the agony and alarm of a woman in the throes of a nervous breakdown than Moore. जैसा कि कहा जा सकता है, मोरे की तुलना में किसी भी स्त्री की पीड़ा और भय पर काबू पाने में कोई बेहतर नहीं है। score: -24.36956024169922
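To make the summing concrete, here is a minimal sketch of how a sentence score is built from per-token probabilities. The probability values below are made up for illustration; in reality they would come from the decoder's softmax at each decoding step.

```python
import math

# Hypothetical per-token probabilities from the decoder (illustrative only;
# real values come from the model's softmax at each decoding step).
token_probs = [0.9, 0.4, 0.75, 0.1, 0.6]

# The sentence score is the sum of the log-probabilities of the tokens
# generated at each step, so longer sentences accumulate more negative terms.
sentence_score = sum(math.log(p) for p in token_probs)
print(round(sentence_score, 4))
```

Because every factor is a probability below 1, every log term is negative and the total only decreases as the sentence grows.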

SCORE is the log-likelihood of the predicted sequence (the sum of per-token log-probabilities, hence negative). Going by the scores above, sentence 2 looks better than sentence 1, but this is not the case: sentence 1 captures the meaning more precisely than sentence 2. The prediction score depends heavily on sentence length, so raw scores are difficult to compare directly.
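One common workaround for the length dependence is to normalize the score by the number of generated tokens, giving an average per-token log-probability. A minimal sketch, using the two scores from this thread with illustrative token counts (the real counts depend on the tokenization the model used):

```python
# Sentence-level scores reported above, paired with illustrative token
# counts (assumptions for this sketch, not the model's actual counts).
scores = {
    "sentence_1": (-45.57688522338867, 80),
    "sentence_2": (-24.36956024169922, 28),
}

for name, (score, n_tokens) in scores.items():
    # Dividing by length yields an average per-token log-probability,
    # which is more comparable across sentences of different lengths.
    print(name, round(score / n_tokens, 3))
```

Under these assumed lengths, sentence 1's per-token score comes out higher than sentence 2's, which matches the observation that the longer translation is actually the better one.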