Closed RashmikaReddy closed 7 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (45bd148) 95.83% compared to head (368e73c) 96.32%.
:umbrella: View full report in Codecov by Sentry.
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (fdcce7e) 96.15% compared to head (9fcd8ec) 96.57%.
Closing the Pull Request for evaluation metrics
Change Description
Adds evaluation metrics to the inference pipeline in main.py, with a corresponding unit test in test_main.py. Closes #2.
Solution Description
Added BLEU and METEOR evaluation metrics to the inference pipeline.
References:
BLEU score calculation: https://www.baeldung.com/cs/nlp-bleu-score
METEOR score calculation: https://huggingface.co/spaces/evaluate-metric/meteor
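For illustration, here is a minimal pure-Python sketch of the two metrics. This is not the PR's actual code (which likely calls a library such as nltk or Hugging Face evaluate); the function names are hypothetical, BLEU uses add-one smoothing, and the METEOR variant matches exact unigrams only (no stemming or synonym matching).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of smoothed modified
    n-gram precisions, times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    if not cand:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # Add-one smoothing so one empty n-gram order doesn't zero the score
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    # Brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)

def simple_meteor(reference, candidate):
    """Exact-match METEOR: harmonic F-mean of unigram precision and
    recall (recall weighted 9:1), times a fragmentation penalty."""
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return 0.0
    # Greedy alignment: each candidate token claims the first unused
    # identical reference token
    used = [False] * len(ref)
    align = []
    for tok in cand:
        match = None
        for j, r in enumerate(ref):
            if not used[j] and r == tok:
                used[j] = True
                match = j
                break
        align.append(match)
    m = sum(a is not None for a in align)
    if m == 0:
        return 0.0
    precision, recall = m / len(cand), m / len(ref)
    f_mean = 10 * precision * recall / (recall + 9 * precision)
    # Count chunks: maximal runs of matches contiguous in both sentences
    chunks, prev = 0, None
    for a in align:
        if a is None:
            prev = None
            continue
        if prev is None or a != prev + 1:
            chunks += 1
        prev = a
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)
```

A perfect candidate scores 1.0 under this BLEU, while METEOR tops out just below 1.0 because even a single aligned chunk incurs a small fragmentation penalty; that mirrors the behavior of the standard definitions.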