Hello,
Thank you for developing such an excellent foundation model. When I try to load the tokenizer for Evo's evo-1-8k-base model with AutoTokenizer, it keeps raising an error that references evo-1-131k-base (see the traceback below). Do you have any suggestions? Any help would be appreciated.
Feng
from transformers import AutoTokenizer

model_name = 'togethercomputer/evo-1-8k-base'
tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          trust_remote_code=True, revision="1.1_fix")
/xxx/miniconda3/envs/evo-design/lib/python3.11/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[20], line 4
3 model_name = 'togethercomputer/evo-1-8k-base'
----> 4 tokenizer = AutoTokenizer.from_pretrained(model_name,
5 trust_remote_code=True, revision="1.1_fix")
File /xxx/miniconda3/envs/evo-design/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py:721, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    715     else:
    716         raise ValueError(
    717             "This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed "
    718             "in order to use this tokenizer."
    719         )
--> 721 raise ValueError(
    722     f"Unrecognized configuration class {config.__class__} to build an AutoTokenizer.\n"
    723     f"Model type should be one of {', '.join(c.__name__ for c in TOKENIZER_MAPPING.keys())}."
    724 )
ValueError: Unrecognized configuration class <class 'transformers_modules.togethercomputer.evo-1-131k-base.567369e9825aa08b3de4b122fca34fac6a890602.configuration_hyena.StripedHyenaConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig.
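For context, this is roughly how I intend to use the tokenizer once it loads. A minimal usage sketch, assuming the remote-code tokenizer behaves like a standard Hugging Face tokenizer; the sequence below is just a placeholder, not my actual data:

# Hypothetical downstream usage, assuming the tokenizer above loads successfully.
# Evo operates on nucleotide sequences with a byte/character-level vocabulary,
# so a plain DNA string should be tokenizable directly.
sequence = "ACGTACGTACGT"  # placeholder sequence
inputs = tokenizer(sequence, return_tensors="pt")
print(inputs["input_ids"].shape)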