rsennrich / subword-nmt

Unsupervised Word Segmentation for Neural Machine Translation and Text Generation
MIT License

How to identify the subunits in an encoded text #72

Closed CodingJonas closed 5 years ago

CodingJonas commented 5 years ago

I understand that by removing the @@ symbols I get back to the input text, but how can I identify the smallest subunits in the processed text?

If for example I have di@@ rect, how can I figure out the smallest subunits? As I understand it, it could be {di, rect}, {d, i, rect}, {d, i, re, ct} and so on, since I don't know which part of di and which part of rect belongs to a subunit, and which part is unknown to the tokenizer. How do I know which part of a word belongs to a learned binary pair, and which part is the rest of the word?

I'm sorry if I just got the overall concept wrong, but I can't figure this out.

rsennrich commented 5 years ago

BPE always starts with a character-level segmentation, so you start with {d i r e c t} and apply pairwise merge operations until you've reached the maximum number of merge operations (at training time), or until there is no more valid merge operation in your learned list (at test time).

So I'm not sure why you're asking about the smallest subunits (this is always characters). Instead, maybe you should be interested in the largest subword units that are still in-vocabulary. If your segmentation produces "di@@ rect", then "di@@" and "rect" are both in-vocabulary subword units, but "direct" is out-of-vocabulary.
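The test-time procedure described above can be sketched in a few lines of Python. This is not subword-nmt's actual implementation, and the merge list below is a made-up example chosen so that "direct" segments into "di@@ rect":

```python
# Minimal sketch of BPE segmentation at test time (not subword-nmt's real code).
# `merges` is a hypothetical learned merge list, in the order it was learned.
merges = [("d", "i"), ("r", "e"), ("re", "c"), ("rec", "t")]

def bpe_segment(word, merges):
    # Start from a character-level segmentation.
    symbols = list(word)
    # Apply each learned merge operation, in order, wherever it matches.
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # merge the adjacent pair
            else:
                i += 1
    return symbols

print(bpe_segment("direct", merges))  # ['di', 'rect']
```

Here "d"+"i" merge into "di", and "r"+"e", "re"+"c", "rec"+"t" successively build "rect"; no learned merge joins "di" and "rect", so the word stays split as "di@@ rect".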

CodingJonas commented 5 years ago

Thank you for your response, you are right, it is more relevant to think about the largest subwords.

I understand your explanation. I hope I don't bother you too much with another question. Just so I get my understanding right: if the final segmentation were di@@ rec@@ t, how would I know whether rec is the largest subunit, or whether it could also be re (belonging to a first binary pair di-re) and c (belonging to a second binary pair c-t), with rec itself not being a learned subword?

rsennrich commented 5 years ago

I'm not sure I get your question.

The final segmentation is produced by a greedy algorithm that iteratively applies the most frequent pairwise merge operation that has been learned on the training set. If you reach the intermediate segmentation (di - re - c - t), and di - re and c - t are the most frequent subword pairs and merged by the algorithm next (instead of re - c), the final segmentation would be dire@@ ct.
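The point that merge order decides the outcome can be illustrated with a small sketch (hypothetical merge lists, not taken from any real trained model):

```python
# Sketch: the ranking of learned merges determines the final segmentation.
def apply_merges(symbols, merges):
    # Apply each merge operation, in learned order, wherever it matches.
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]
            else:
                i += 1
    return symbols

start = ["di", "re", "c", "t"]
# If di-re and c-t rank highest, "rec" is never formed:
print(apply_merges(start[:], [("di", "re"), ("c", "t")]))   # ['dire', 'ct']
# If re-c ranks higher instead, the result differs:
print(apply_merges(start[:], [("re", "c"), ("rec", "t")]))  # ['di', 'rect']
```

So from the same intermediate segmentation (di - re - c - t), one merge ranking yields "dire@@ ct" and another yields "di@@ rect"; the greedy algorithm always follows the ranking learned from training-set frequencies.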

CodingJonas commented 5 years ago

Thank you, your explanations helped me understand how your program works!