The decode_beams method right now returns a tuple of info that includes the decoded text. However, for many purposes detailed below, we require the actual token ids to be returned instead of the text.
NeMo's decoding framework abstracts away how tokens are encoded and decoded, because we can map individual token ids through their corresponding decoding step. For example:
A char model can emit tokens [0, 1, 2, 3], and we can do a simple dictionary lookup mapping them to [' ', 'a', 'b', 'c'].
A subword model can emit tokens [0, 1, 2, 3], and we can map them with a SentencePiece detokenization step to ['▁', 'a', 'b', 'c'].
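A minimal sketch of both mappings, assuming a plain Python vocabulary list for the char model and a SentencePiece model file for the subword model (the file path is illustrative):

```python
import sentencepiece as spm

# Char model: decoding token ids is a plain dictionary/list lookup.
char_vocab = [' ', 'a', 'b', 'c']
token_ids = [0, 1, 2, 3]
char_text = ''.join(char_vocab[t] for t in token_ids)  # ' abc'

# Subword model: delegate to the SentencePiece detokenizer instead.
sp = spm.SentencePieceProcessor(model_file='tokenizer.model')  # hypothetical path
subword_text = sp.decode(token_ids)
```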
Given token ids, we can perform much more careful decoding strategies. Right now that is not possible, since only text is returned (or word frames, but again, subwords don't correspond one-to-one to word frames).
Given token ids, we can further perform accurate word merging with our own algorithms, as sketched below.
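As an example of the kind of custom merging this would enable, here is a minimal sketch assuming a SentencePiece-style vocabulary where '▁' marks the start of a new word (the helper name and the id_to_piece mapping are hypothetical):

```python
def merge_subwords_to_words(token_ids, id_to_piece):
    """Merge subword token ids into whole words, splitting on the '▁' marker."""
    words, current = [], []
    for t in token_ids:
        piece = id_to_piece[t]
        # A leading '▁' starts a new word; flush the one being built.
        if piece.startswith('▁') and current:
            words.append(''.join(current))
            current = []
        current.append(piece.lstrip('▁'))
    if current:
        words.append(''.join(current))
    return words

id_to_piece = {0: '▁hel', 1: 'lo', 2: '▁world'}     # hypothetical vocab
merge_subwords_to_words([0, 1, 2], id_to_piece)     # -> ['hello', 'world']
```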
Can the explicit integer ids be returned?
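For concreteness, one possible shape of the extended per-beam output, a hedged sketch only; the field names below are illustrative, not the actual decode_beams return type:

```python
from typing import List, NamedTuple, Tuple

class Beam(NamedTuple):
    text: str
    token_ids: List[int]                            # the requested addition
    text_frames: List[Tuple[str, Tuple[int, int]]]  # existing word/frame info
    logit_score: float

# Usage (hypothetical):
# beams = decoder.decode_beams(logits)
# beams[0].token_ids  # explicit integer ids, ready for custom detokenization
```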
FYI @tango4j