tjake / Jlama

Jlama is a modern LLM inference engine for Java
Apache License 2.0

Bug: Error parsing tokenizer.json when loading embedding model #118

Open Jozurf opened 6 days ago

Jozurf commented 6 days ago

When loading some popular embedding models, I am currently running into a Jackson `MismatchedInputException` while parsing tokenizer.json. After further investigation, it appears that the type of the value under the `"vocab"` key in tokenizer.json is not consistent across models on Hugging Face. In some models the value of `"vocab"` is a nested map, while in others it is an array of arrays. When `"vocab"` is an array of arrays, the `SafeTensorSupport.loadTokenizer` method fails: the call `TokenizerModel model = om.treeToValue(rootNode.get("model"), TokenizerModel.class)` cannot map the JsonNode to a `TokenizerModel` because it expects the value of `"vocab"` to be a `BiMap<String, Long>`. The stack trace is pasted below for reference.

This is pretty prevalent among some of the most popular embedding models on Hugging Face, such as sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 and intfloat/multilingual-e5-small, both of which use an array of arrays as the value of the `"vocab"` key in tokenizer.json. A sketch of one way both shapes could be handled is included after the stack trace.

Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `java.util.LinkedHashMap<java.lang.String,java.lang.Long>` from Array value (token `JsonToken.START_ARRAY`)
 at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: com.github.tjake.jlama.safetensors.tokenizer.TokenizerModel["vocab"])
        at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
        ...
        at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342)
        at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4881)
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3035) 
        at com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:3499) 
        at com.github.tjake.jlama.safetensors.SafeTensorSupport.loadTokenizer(SafeTensorSupport.java:144) 
        at com.github.tjake.jlama.safetensors.tokenizer.WordPieceTokenizer.<init>(WordPieceTokenizer.java:50) 
        at com.github.tjake.jlama.model.bert.BertTokenizer.<init>(BertTokenizer.java:24) 
        at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) 
        at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) 
        at java.lang.reflect.Constructor.newInstance(Constructor.java:486) 
        at com.github.tjake.jlama.model.ModelSupport.loadModel(ModelSupport.java:186)
        at com.github.tjake.jlama.model.ModelSupport.loadEmbeddingModel(ModelSupport.java:93) 
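
For illustration only (this is not Jlama's actual code): a minimal Jackson deserializer that accepts both vocab shapes might look like the sketch below. The class name `VocabDeserializer` is hypothetical, it uses a plain `Map` rather than Jlama's `BiMap`, and it maps the array entries to their index, which silently drops the unigram scores.

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;

import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class VocabDeserializer extends JsonDeserializer<Map<String, Long>> {
    @Override
    public Map<String, Long> deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        JsonNode node = p.getCodec().readTree(p);
        Map<String, Long> vocab = new LinkedHashMap<>();
        if (node.isObject()) {
            // WordPiece/BPE style: "vocab": { "[PAD]": 0, "hello": 1, ... }
            node.fields().forEachRemaining(e -> vocab.put(e.getKey(), e.getValue().asLong()));
        } else if (node.isArray()) {
            // Unigram style: "vocab": [ ["<unk>", 0.0], ["hello", -8.1], ... ]
            // The array index is used as the id here, so the scores are lost.
            for (int i = 0; i < node.size(); i++) {
                vocab.put(node.get(i).get(0).asText(), (long) i);
            }
        }
        return vocab;
    }
}
```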
tjake commented 2 days ago

Hi @Jozurf

I looked into this, and it requires Jlama support for Unigram tokenizers (see https://huggingface.co/learn/nlp-course/en/chapter6/7).

This can be done, but it's not as trivial as I was initially hoping.
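
For context on why it's non-trivial: in a Unigram tokenizer each `"vocab"` entry carries a score (a log probability), and tokenization picks the segmentation of the input with the best total score, so the scores can't simply be discarded when parsing. A toy sketch of that selection step (not Jlama code; the class name, example vocabulary, and scores are made up, and handling of unknown characters is omitted):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class UnigramSketch {
    // Returns the segmentation of `text` with the highest total log-probability,
    // given per-token scores, via a simple dynamic program over end positions.
    static List<String> tokenize(String text, Map<String, Double> scores) {
        int n = text.length();
        double[] best = new double[n + 1];
        int[] backPtr = new int[n + 1];
        Arrays.fill(best, Double.NEGATIVE_INFINITY);
        best[0] = 0.0;
        for (int end = 1; end <= n; end++) {
            for (int start = 0; start < end; start++) {
                String piece = text.substring(start, end);
                Double score = scores.get(piece);
                if (score != null && best[start] + score > best[end]) {
                    best[end] = best[start] + score;
                    backPtr[end] = start;
                }
            }
        }
        // Walk the back pointers to recover the chosen pieces in order.
        List<String> tokens = new ArrayList<>();
        for (int end = n; end > 0; end = backPtr[end]) {
            tokens.add(text.substring(backPtr[end], end));
        }
        Collections.reverse(tokens);
        return tokens;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of(
                "un", -2.0, "i", -3.0, "gram", -2.5, "u", -4.0, "n", -4.0, "ig", -3.5, "ram", -3.0);
        System.out.println(tokenize("unigram", scores)); // prints [un, i, gram]
    }
}
```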