If I run:

tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-v2-xlarge')

I get this error:

ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
https://huggingface.co/microsoft/deberta-v2-xlarge/tree/main