I downloaded mMiniLMv2. The compressed package only contains the model file and no tokenizer information. However, judging from the shape of the embedding layer, it seems that mMiniLMv2 may use the same tokenizer as the original multilingual MiniLM (i.e., the XLM-R tokenizer).
like this:
from transformers import XLMRobertaTokenizer
tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")
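To make the embedding-shape argument concrete, here is a minimal sketch of the check I mean. The function name is my own, and 250,002 is the XLM-R tokenizer vocabulary size; if the checkpoint's embedding matrix has that many rows, the tokenizers are very likely compatible:

```python
# Sketch: decide whether a tokenizer plausibly matches a checkpoint by
# comparing the tokenizer's vocab size with the rows of the embedding matrix.
# (Function name and example numbers are illustrative, not from the package.)

def check_tokenizer_match(embedding_shape, tokenizer_vocab_size):
    """Return True if the embedding's vocab dimension equals the tokenizer vocab."""
    vocab_rows, _hidden = embedding_shape
    return vocab_rows == tokenizer_vocab_size

# XLM-R's tokenizer has 250,002 tokens; an mMiniLMv2 embedding of shape
# (250002, 384) therefore points at the XLM-R / multilingual MiniLM tokenizer.
print(check_tokenizer_match((250002, 384), 250002))  # → True
print(check_tokenizer_match((30522, 384), 250002))   # → False (e.g. a BERT vocab)
```

In practice you would read `embedding_shape` from the downloaded state dict (the word-embedding weight tensor) and `tokenizer_vocab_size` from `len(tokenizer)`.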