### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature Description
This issue proposes implementing a sequence-to-sequence model with an attention mechanism for …
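The description above is cut off, but as a rough illustration of the kind of component being requested, here is a minimal additive (Bahdanau-style) attention sketch in PyTorch. Every name in it is hypothetical, not taken from the issue:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention: scores each encoder state
    against the current decoder state and returns a weighted context."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.w_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.w_dec(dec_state).unsqueeze(1) + self.w_enc(enc_states)
        )).squeeze(-1)                       # (batch, src_len)
        weights = F.softmax(scores, dim=-1)  # attention distribution
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights

# Shape check with random tensors.
attn = AdditiveAttention(hidden_dim=16)
context, weights = attn(torch.randn(2, 16), torch.randn(2, 7, 16))
```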
---
### Objective:
To create a user-friendly project-management dashboard whose key features improve accessibility and streamline monitoring and reporting.
### Summary:
The dashboard aims to…
---
- Paper: [BLEU: a Method for Automatic Evaluation of Machine Translation](https://aclanthology.org/P02-1040.pdf)
- Blog: https://jaketae.github.io/study/bleu/
- Wiki: https://en.wikipedia.org/wiki/B…
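For anyone who wants to try the metric alongside the readings above, a minimal sentence-level BLEU computation with NLTK might look like this (the example sentences are made up):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# References and hypothesis must be pre-tokenized.
reference = ["the", "cat", "sat", "on", "the", "mat"]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids a zero score when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.4f}")
```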
---
Within "evaluation metrics", talk about how ROUGE is not really intended for machine translation, and the pitfalls thereof.
https://stats.stackexchange.com/questions/301626/interpreting-rouge-score…
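As a concrete reference point, here is how ROUGE is often computed in practice with the `rouge-score` package (one implementation among several; the example strings are invented):

```python
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat"
hypothesis = "the cat is on the mat"

# score(target, prediction) returns precision/recall/F1 per variant.
for name, result in scorer.score(reference, hypothesis).items():
    print(f"{name}: P={result.precision:.3f} "
          f"R={result.recall:.3f} F1={result.fmeasure:.3f}")
```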
---
https://aclanthology.org/D12-1097/
---
Great library, guys. I dug around in it a little and made this IPython notebook demonstrating how to use it for tasks outside of Machine Translation.
[Hipster Machine Learning Metrics Made Easy](h…
---
## Issue Description
Hello GitHub community,
I am currently seeking guidance on how to effectively evaluate the MADLAD-400 model, a 7.2B-parameter machine translation (MT) model that has been fi…
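One common approach, offered here as a sketch rather than a definitive recipe, is corpus-level BLEU and chrF with sacreBLEU over a held-out test set. The file names below are hypothetical placeholders for decoded model outputs and aligned references:

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical files: one segment per line, hypotheses aligned with references.
with open("madlad400_outputs.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# sacreBLEU expects a list of reference streams (here, a single one).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}")
```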
---
https://github.com/cloudsecurityalliance/continuous-audit-metrics should house components appropriate to tracks 1-3 of the effort:
1. The metric catalog (propose new metrics, update existing metri…
---
## 🚀 Feature
Add `CharacTER`, a character-level translation edit rate metric used for NMT evaluation.
**Sources:**
Paper - [CharacTER: Translation Edit Rate on Character Level](https://aclanthology.org/W16-2342.pdf)
[Repo](h…
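To give a feel for the core idea, a simplified sketch follows: character-level Levenshtein distance normalized by hypothesis length. The actual CharacTER metric from the paper also performs a word-shift step before the edit-distance computation, which this sketch omits:

```python
def char_edit_rate(hypothesis: str, reference: str) -> float:
    """Simplified: character-level Levenshtein distance normalized by
    hypothesis length. Real CharacTER also applies word shifts first."""
    m, n = len(hypothesis), len(reference)
    prev = list(range(n + 1))  # distances for the empty hypothesis prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if hypothesis[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(m, 1)

print(char_edit_rate("the cat is on the mat", "the cat sat on the mat"))
```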
---
Hi,
Hope you are doing well. I have an idea for a potential contribution to the NLTK translation module.
Currently, in nltk.translate, we have bleu score, meteor score, and other evaluation metrics …
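For context, here is a quick usage sketch of the existing `meteor_score` in `nltk.translate`; recent NLTK versions expect pre-tokenized input and need the WordNet data downloaded:

```python
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR uses WordNet for synonym matching

# nltk >= 3.6 expects pre-tokenized references and hypothesis.
reference = ["the", "cat", "sat", "on", "the", "mat"]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

print("METEOR:", meteor_score([reference], hypothesis))
```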