We already statistically generate word alignment information, so it should be possible to go through the parallel datasets and extract word pairs for the most common words that are aligned. Since the alignments are over subword units, the algorithm needs to match up the source side with the target side at word-like units. This could then be used to generate a dataset of single-word translations drawn from a variety of domains. The statistical distribution of those word pairs could also be computed.
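A minimal sketch of that extraction step, assuming SentencePiece-style subword tokens (with `▁` marking word starts) and Pharaoh-format alignment lines like `0-0 1-2` over subword indices; the function names here are hypothetical, not part of the existing pipeline:

```python
from collections import Counter

def words_from_subwords(tokens):
    """Group SentencePiece subwords into word-like units.

    Returns (words, index_map) where index_map[i] is the word index
    of subword token i. Assumes "▁" marks the start of a word.
    """
    words, index_map = [], []
    for token in tokens:
        if token.startswith("▁") or not words:
            words.append(token.lstrip("▁"))
        else:
            words[-1] += token
        index_map.append(len(words) - 1)
    return words, index_map

def count_aligned_word_pairs(src_lines, trg_lines, alignment_lines):
    """Count (source word, target word) pairs from subword alignments.

    Each Pharaoh alignment point over subword indices is lifted to
    the enclosing word-like units on both sides.
    """
    counts = Counter()
    for src, trg, aln in zip(src_lines, trg_lines, alignment_lines):
        src_words, src_map = words_from_subwords(src.split())
        trg_words, trg_map = words_from_subwords(trg.split())
        # Deduplicate per sentence so multi-subword words count once.
        word_pairs = {
            (src_map[int(s)], trg_map[int(t)])
            for s, t in (point.split("-") for point in aln.split())
        }
        for s_idx, t_idx in word_pairs:
            counts[(src_words[s_idx], trg_words[t_idx])] += 1
    return counts
```

`counts.most_common(n)` would then give the most frequent aligned pairs, and normalizing the counts gives the statistical distribution mentioned above.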
At this point each aligned word pair would be equally likely to be sampled. However, the decoder should learn a statistical distribution, so we should consider strategies for presenting multiple examples of the same words. The dataset could include duplicates of word pairs according to their statistical distribution (re-introduced after the deduplication step), or maybe OpusTrainer could produce the words on a certain distribution as an augmentation filter of some kind.
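The first strategy could be as simple as weighted sampling from the pair counts produced by the sketch above; the OpusTrainer route would presumably wrap similar logic in one of its modifiers, whose exact API isn't shown here. A hedged sketch of the sampling approach:

```python
import random

def sample_word_pairs(counts, n, seed=0):
    """Draw n word pairs weighted by their empirical frequency.

    Effectively re-introduces duplicates after deduplication, so
    common pairs show up proportionally more often in the dataset.
    """
    rng = random.Random(seed)
    pairs = list(counts.keys())
    weights = list(counts.values())
    return rng.choices(pairs, weights=weights, k=n)
```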