Open kargaranamir opened 5 months ago
Group A:
- [x] solved in v3, Nllbseed and Flores (zgh_Tfng, tzm_Tfng, taq_Tfng): https://github.com/facebookresearch/flores/issues/63
- [x] solved in v3, Flores (yue_Hant): https://github.com/facebookresearch/flores/issues/61
- [ ] Nllbseed (ary_Arab): https://github.com/facebookresearch/flores/issues/64
- [x] solved in v3, the yue bible is tokenized by spaces. We should concatenate the tokens without spaces and then train the LID. Update: after concatenation we found that the yue label for this data is not correct, so we removed this part of the data.
- [ ] further clean dag_Latn from Wikipedia (remove citation markers).
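For the dag_Latn cleanup item above, here is a minimal sketch of the kind of filter meant, assuming Wikipedia-style citation markers such as `[3]` or `[citation needed]`; the regex patterns are illustrative assumptions, not the actual GlotLID pipeline:

```python
import re

# Assumed markers: bracketed footnote numbers ("[3]", "[12]") and the
# literal "[citation needed]" tag left over from Wikipedia scrapes.
CITATION_RE = re.compile(r"\[(?:\d+|citation needed)\]")

def strip_citations(line: str) -> str:
    """Remove citation markers, then collapse any leftover double spaces."""
    cleaned = CITATION_RE.sub("", line)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_citations("Some sentence.[12] Another one.[citation needed]"))
# → "Some sentence. Another one."
```

Running this over each line before training keeps the LID model from treating bracketed digits as language features.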
Group B:
* add snk (GlotWeb, GlotSparse)
* add ykg (GlotWeb, GlotSparse)
* add srp-Latn (GlotWeb, GlotSparse): hbs languages (hrv, srp, cnr, bos) are misclassified more than other langs.
* add domain and multiple langs from Pontoon-Translations: cleaning is a bit challenging
* add lzz: https://incubator.wikimedia.org/wiki/Special:PrefixIndex/Wp/lzz
* add evn: https://incubator.wikimedia.org/wiki/Special:PrefixIndex/Wp/evn
* add syl: https://incubator.wikimedia.org/wiki/Special:PrefixIndex/Wp/syl/
* add srb:
* add pbs:
* add kpo:
* bible in Australian Indigenous languages: https://aboriginalbibles.org.au/
* some ebibles in HTML format: https://download.sabda.org/yesusmesias/__ebible/html/
* data in many langs from the CDC: https://wwwn.cdc.gov/pubs/other-languages
* many picture-book bibles here: https://www.globalrecordings.net/en/scripts, for example bkx: https://www.globalrecordings.net/en/script/812 or mhs: https://www.globalrecordings.net/en/script/mhs/395
> Group B:
>
> * add domain and multiple langs from [Pontoon-Translations](https://huggingface.co/datasets/ayymen/Pontoon-Translations): cleaning is a bit challenging

Are you talking about cleaning the data itself or the metadata (lang codes)? I intend to release new versions of both Pontoon Translations and Weblate Translations (which has more languages, BTW, but probably lower quality for LID), but I'm not really sure how I'm going to fix the lang codes.
> > Group B:
> >
> > * add domain and multiple langs from [Pontoon-Translations](https://huggingface.co/datasets/ayymen/Pontoon-Translations): cleaning is a bit challenging
>
> Are you talking about cleaning the data itself or the metadata (lang codes)? I intend to release new versions of both Pontoon Translations and Weblate Translations (which has more languages, BTW, but probably lower quality for LID), but I'm not really sure how I'm going to fix the lang codes.
About the cleaning, I meant more the tags like `<playIcon>` or `{$goal}`; for LID they should be removed, otherwise the model learns bad features. It's not too difficult, but it should be done. I will check your HF every once in a while to see if you publish anything new.
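To make that concrete, a minimal sketch of this kind of cleaning; the two regexes are my assumptions about what Pontoon-style markup looks like (XML-ish tags and Fluent-style placeables), not an exhaustive rule set:

```python
import re

# Assumed patterns: XML-ish tags like "<playIcon>", "</b>", "<br/>" and
# Fluent-style placeables like "{$goal}" or "{ count }". Replacing them
# with a space keeps surrounding words from being glued together.
TAG_RE = re.compile(r"</?[A-Za-z][\w-]*/?>")
PLACEABLE_RE = re.compile(r"\{\s*\$?[\w.-]+\s*\}")

def clean_pontoon_string(text: str) -> str:
    """Drop markup tokens so the LID model only sees natural-language words."""
    text = TAG_RE.sub(" ", text)
    text = PLACEABLE_RE.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_pontoon_string("Press <playIcon> to reach {$goal} now"))
# → "Press to reach now"
```

Strings that become empty (or nearly empty) after cleaning would be dropped entirely, since a string that was all markup carries no language signal.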
Can you clarify why https://github.com/facebookresearch/flores/issues/61 is solved? I don't see any update in their data.
@laubonghaudoi For my project (GlotLID), the issue is resolved because I deleted the yue data from my Flores benchmark. This project, GlotLID, trains a better language identification system, and Flores-200 is one of the benchmarks I used.

But to answer your question in general: this issue is not resolved in Flores-200 at its root. They created another project to maintain Flores, https://github.com/openlanguagedata/flores, but that one does not address this issue either. Someone may need to raise this issue again in the new project.
Group A: Please add here any suggestions for obtaining cleaner sources and evaluation data.
Group B: Please add any possible new sources here, especially for languages not yet included.