dequeueing opened this issue 4 days ago
```python
from pathlib import Path
from typing import Optional, Union
from transformers import PretrainedConfig

def load_params_config(model: Union[str, Path],
                       revision: Optional[str],
                       token: Optional[str] = None,
                       **kwargs) -> PretrainedConfig:
    # This function loads a params.json config, which
    # should be used when loading models in mistral format.
    ...
```
It turns out that this function looks for `params.json`, but I could not see that file in the downloaded directory. Could this be a download problem?
You can try removing it from the cache and downloading it again. cc @patrickvonplaten
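A minimal sketch of clearing just that model from the cache, assuming the default Hugging Face cache location (adjust if `HF_HOME` points elsewhere):

```python
import shutil
from pathlib import Path

# Default HF cache layout assumed; cache entries follow the
# models--{org}--{repo} naming convention.
cached = (Path.home() / ".cache/huggingface/hub"
          / "models--mistralai--Mistral-7B-Instruct-v0.3")
shutil.rmtree(cached, ignore_errors=True)
```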
Can you make sure to download all of the files, and then load the model locally with `--tokenizer_format mistral`?
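For illustration, a sketch of that suggestion using `huggingface_hub` plus the offline Python API; `tokenizer_mode="mistral"` is my assumed Python-side counterpart of the `--tokenizer_format` flag mentioned above, not confirmed by this thread:

```python
from huggingface_hub import snapshot_download
from vllm import LLM

# Download every file in the repo, including the mistral-format
# params.json, consolidated.safetensors and tokenizer.model.v3
# (a token may be needed if the repo is gated).
local_dir = snapshot_download("mistralai/Mistral-7B-Instruct-v0.3")

# Assumed offline-API analogue of the CLI flag suggested above.
llm = LLM(model=local_dir, tokenizer_mode="mistral")
```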
Your current environment
How would you like to use vllm
I want to run inference with mistralai/Mistral-7B-Instruct-v0.3 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). I downloaded the model to a local directory using:
I can see that the model files are indeed downloaded.
The `config.json` also seems to be valid. I use the following code to run the model, as indicated on the HF website:
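(The exact snippet is not preserved in this issue; the following is a minimal sketch of the vLLM offline API with an assumed local path, not the reporter's literal code.)

```python
from vllm import LLM, SamplingParams

# The local path is assumed; point it at the downloaded snapshot.
llm = LLM(model="/path/to/Mistral-7B-Instruct-v0.3")
outputs = llm.generate(["Hello, how are you?"],
                       SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```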
I get the following error:
I also tried printing out the type of the config:
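(Also reconstructed; a sketch of such a check, with the directory name assumed:)

```python
from transformers import AutoConfig

# Path assumed; this loads config.json from the local directory.
config = AutoConfig.from_pretrained("/path/to/Mistral-7B-Instruct-v0.3")
print(type(config))  # expected: a PretrainedConfig subclass, e.g. MistralConfig
```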
Can anyone give a hint on how to solve the problem? Thanks in advance!