SEACrowd / seacrowd-datahub

A collaborative project to collect datasets in SEA languages, SEA regions, or SEA cultures.
Apache License 2.0

Closes #629 | Add/Update Dataloader VLSP2020 MT #642

Closed · patrickamadeus closed this 4 months ago

patrickamadeus commented 5 months ago

Closes #629

Checkbox

Tests

indomain-news (3 splits)

[screenshot of test results]

basic (1 split)

[screenshot of test results]

VLSP20-official (1 split)

[screenshot of test results]

NB

patrickamadeus commented 4 months ago

It's done! @sabilmakbar, thank you for the review ☺️

sabilmakbar commented 4 months ago

By the way, @patrickamadeus, did you happen to notice that the number of samples generated by your dataloader implementation differs from the counts reported in their GitHub repo?

I noticed two subsets with minor sample-count differences (a quick way to reproduce the generated counts is sketched after the list):

  1. VLSP20-official: 789 (reported in the source GH) vs. 790 (generated examples)
  2. wiki-alt: 20000 (reported in the source GH) vs. 20106 (generated examples)
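For reference, a minimal sketch for reproducing the generated counts; the script path and config names below are assumptions that mirror the labels used in this thread, not necessarily the loader's actual names:

```python
from datasets import load_dataset

# Assumed script path and config names; adjust to the actual loader added in this PR.
SCRIPT = "seacrowd/sea_datasets/vlsp2020_mt_envi/vlsp2020_mt_envi.py"

for config in ("VLSP20-official", "wiki-alt"):
    dataset = load_dataset(SCRIPT, name=config)
    for split_name, split in dataset.items():
        print(f"{config} / {split_name}: {len(split)} examples")
```
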
patrickamadeus commented 4 months ago

> By the way, @patrickamadeus, did you happen to notice that the number of samples generated by your dataloader implementation differs from the counts reported in their GitHub repo?
>
> I noticed two subsets with minor sample-count differences:
>
>   1. VLSP20-official: 789 (reported in the source GH) vs. 790 (generated examples)
>   2. wiki-alt: 20000 (reported in the source GH) vs. 20106 (generated examples)

Hi @sabilmakbar! That's interesting. They mention that they have N sentences, but I actually split the dataset on every \n instead of assuming . marks the end of a sentence (which would inflate the counts even more).

Since they don't give a clear definition of what counts as a "sentence", do you think it's reasonable for now to treat each line as one sentence? I have reviewed each dataset, and the number of generated examples does match each file's line count.
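A minimal sketch of that line-based splitting, with placeholder paths and field names (not the exact dataloader code):

```python
# Sketch: one parallel example per "\n"-delimited line (placeholder paths and field names).
def read_parallel_lines(en_path: str, vi_path: str):
    with open(en_path, encoding="utf-8") as f_en, open(vi_path, encoding="utf-8") as f_vi:
        en_lines = f_en.read().splitlines()
        vi_lines = f_vi.read().splitlines()
    # Parallel MT files are expected to align line by line.
    assert len(en_lines) == len(vi_lines), "source/target files must have the same line count"
    for idx, (en, vi) in enumerate(zip(en_lines, vi_lines)):
        yield idx, {"id": str(idx), "en": en, "vi": vi}
```

Under this scheme, the number of generated examples equals each file's line count, which is what the counts above reflect.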

sabilmakbar commented 4 months ago

> Hi @sabilmakbar! That's interesting. They mention that they have N sentences, but I actually split the dataset on every \n instead of assuming . marks the end of a sentence (which would inflate the counts even more).
>
> Since they don't give a clear definition of what counts as a "sentence", do you think it's reasonable for now to treat each line as one sentence? I have reviewed each dataset, and the number of generated examples does match each file's line count.

Okay then. Since everything else matches the reported numbers, we can accept it (noting this in an inline comment would probably be better, though).
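For illustration, the inline note could read something like this (placement near the loader's example-generation code is an assumption; the wording is just a sketch built from the numbers discussed above):

```python
# NOTE: examples are produced by splitting each file on "\n" (one example per line),
# so the counts differ slightly from the sentence counts reported in the source repo
# (e.g., VLSP20-official: 790 generated vs. 789 reported; wiki-alt: 20106 vs. 20000),
# since the source does not define explicit sentence boundaries.
```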

sabilmakbar commented 4 months ago

thanks for the work, @patrickamadeus! let's wait for @raileymontalan's review

holylovenia commented 4 months ago

Hi @raileymontalan, I would like to let you know that we plan to finalize the calculation of the open contributions (e.g., dataloader implementations) in 31 hours, so it'd be great if we could wrap up the review and merge this PR before then.

cc: @patrickamadeus