Open shaltielshmid opened 7 months ago
Alternative suggestion:
Create a normalizer + decoder for doing this (assuming BytesToUnicodeDict and UnicodeToBytesDict are static dictionaries created using the code above).
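The dictionary-building code referred to above is not shown in this excerpt; it presumably mirrors HuggingFace's `bytes_to_unicode()`. As a point of reference, here is a minimal Python sketch of building both lookup tables (the function name `build_byte_unicode_tables` is a placeholder, not part of any library):

```python
def build_byte_unicode_tables():
    """GPT-2-style byte-level tables: printable bytes map to themselves,
    the remaining 68 bytes are shifted to code points above U+00FF."""
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = list(bs)
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)  # shift unprintable bytes past U+00FF
            n += 1
    bytes_to_unicode_dict = dict(zip(bs, map(chr, cs)))
    unicode_to_bytes_dict = {c: b for b, c in bytes_to_unicode_dict.items()}
    return bytes_to_unicode_dict, unicode_to_bytes_dict
```

The C# statics would be a direct translation of this construction.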
```csharp
public class BytesToUnicodeNormalizer : Normalizer {
    public static readonly BytesToUnicodeNormalizer Instance = new();

    public override NormalizedString Normalize(string original) {
        string normalized = new(Encoding.UTF8.GetBytes(original).Select(b => BytesToUnicodeDict[b]).ToArray());
        return new NormalizedString(original, normalized, null, true);
    }
}
```
```csharp
public class UnicodeToBytesBpeDecoder : TokenizerDecoder {
    private readonly TokenizerDecoder _decoder;

    public UnicodeToBytesBpeDecoder() {
        _decoder = new BpeDecoder();
    }

    public override string Decode(IEnumerable<string> tokens) {
        string decoded = _decoder.Decode(tokens);
        return Encoding.UTF8.GetString(decoded.Select(c => (byte)UnicodeToBytesDict[c]).ToArray());
    }
}
```
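To make the intent of this pair concrete, here is a quick round-trip sketch in Python, using the same GPT-2-style byte table that the `BytesToUnicodeDict`/`UnicodeToBytesDict` statics are assumed to hold: `normalize` plays the role of `BytesToUnicodeNormalizer.Normalize`, and `denormalize` the byte-mapping half of `UnicodeToBytesBpeDecoder.Decode` (all names here are illustrative):

```python
def _byte_table():
    # GPT-2 byte -> unicode-char table (printable bytes map to themselves,
    # the rest are shifted to code points above U+00FF)
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = list(bs)
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

_B2U = _byte_table()
_U2B = {c: b for b, c in _B2U.items()}

def normalize(text: str) -> str:
    # UTF-8 bytes -> mapped unicode string (what gets fed to the BPE model)
    return "".join(_B2U[b] for b in text.encode("utf-8"))

def denormalize(mapped: str) -> str:
    # inverse: mapped chars -> bytes -> original UTF-8 string
    return bytes(_U2B[c] for c in mapped).decode("utf-8")
```

The round trip is lossless for any input string, since every possible byte value has a unique mapped character.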
And then we can create the tokenizer like this (the EmptyPreTokenizer class is a custom PreTokenizer whose only purpose is to ensure the default WhitespaceTokenizer isn't used):
```csharp
public class HFStyleBPETokenizer : Tokenizer {
    public HFStyleBPETokenizer(string vocabFname, string mergesFname, string unkToken)
        : base(new Bpe(vocabFname, mergesFname, unkToken), EmptyPreTokenizer.Instance, BytesToUnicodeNormalizer.Instance) {
        Decoder = new UnicodeToBytesBpeDecoder();
    }
}
```
Is your feature request related to a problem? Please describe.
I'm requesting this feature after trying to use a GPT2-style tokenizer, trained with HuggingFace, from my .NET code. I had trained a model and converted it to ONNX, but the tokenizer didn't transfer. An exact description of the problem is given below.
Describe the solution you'd like
Add support for a flag indicating that the tokenizer came from the HuggingFace BPE trainer, and behind the scenes handle the minor changes required.
Describe alternatives you've considered
Currently I have a class I wrote which wraps a BPE trainer and applies the adjustments before every call to the ML.NET BPE tokenizer.
Additional context
In the HuggingFace BPE code there is a function,
bytes_to_unicode()
, which builds a mapping from UTF-8 byte values to unicode characters. Before encoding, they treat the string as raw bytes and run every byte through this mapping to a representative unicode character; after decoding, they apply the inverse mapping to recover the bytes. Examples of where it's used can be found here and here and in other places.
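A few concrete values from that mapping, reconstructed here in Python, show why GPT-2-style vocab files contain tokens such as `Ġworld` (leading space) and `Ċ` (newline):

```python
def bytes_to_unicode():
    # reconstruction of the table built by HuggingFace's bytes_to_unicode():
    # printable bytes keep their own character, the other 68 byte values
    # are assigned code points 256, 257, ... in ascending byte order
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = list(bs)
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

table = bytes_to_unicode()
# printable ASCII maps to itself, unprintable bytes get shifted:
#   0x41 'A'     -> "A"
#   0x20 space   -> "Ġ" (U+0120)
#   0x0A newline -> "Ċ" (U+010A)
```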
Real example:
I trained a BPE tokenizer using HuggingFace's
tokenizers.ByteLevelBPETokenizer
. The merges.txt and vocab.json can be found here: https://gist.github.com/shaltielshmid/58b7c1109639eefcd714eb6bfc3eb602. Sample Python code:
Sample C# code:
Proposed Solution
Create a static dictionary in the BPE class, which is initialized once:
Then, in the BPE.cs class, in the
Tokenize
function here, add the following check:
And then in the BPEDecoder.cs file, make the corresponding change in the
Decode
function here.
Would be happy to compile this into a PR, if relevant.
@luisquintanilla