Open DavidFarago opened 4 months ago
Since I cannot load models from the Hugging Face Hub (see https://github.com/mistralai/mistral-finetune/issues/27), I am downloading models to a local directory. However, they are either in the format

```
config.json
generation_config.json
special_tokens_map.json
added_tokens.json
tokenizer.model
tokenizer_config.json
pytorch_model.bin.index.json
pytorch_model-00001-of-00003.bin
pytorch_model-00002-of-00003.bin
pytorch_model-00003-of-00003.bin
adapter
```
or in the format
```
config.json
model-00001-of-00006.safetensors
model-00002-of-00006.safetensors
model-00003-of-00006.safetensors
model-00004-of-00006.safetensors
model-00005-of-00006.safetensors
model-00006-of-00006.safetensors
model.safetensors.index.json
```
Could you either add the flexibility of `AutoModel.from_pretrained()` to `wrapped_model.py`, or explain how I can store my Hugging Face models locally in a format that `wrapped_model.py` can digest?
The same applies to `save_pretrained()`: I did not see that functionality in the current code base and hence cannot leverage open-source packages.