Closed · YongTaeIn closed this issue 1 month ago
Hi~ Thanks for your interest.
You should change the path in the config so it points to your downloaded folder. The instructions can be found here.
Thanks for the answer, I solved the previous problem. By the way, is there a config.json file in the llama folder I downloaded from the Meta website? I executed the download.sh file, but there isn't any config file.
Hi~
We deploy our model based on the Hugging Face version.
Therefore, it's better for you to get a license from here and decide whether you need to download all the files or just paste meta-llama/Llama-2-7b-hf
into the config path.
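If you go the config route, the entry might look like the following sketch. This is only an illustration: the key name "model_path" is an assumption, so check the project's actual config file for the real field name.

```json
{
  "model_path": "meta-llama/Llama-2-7b-hf"
}
```

With the Hub id, transformers fetches the (gated) files automatically once you are logged in; with a local path, it uses your downloaded folder instead.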
Thanks for helping :)
I enjoyed your project. I would like to ask about an error I'm running into. After downloading llama2-7b, I was able to successfully log in to Hugging Face through the terminal.
However, the following error occurs during inference.
OSError: llama/Llama-2-7b-chat is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'. If this is a private repository, make sure to pass a token having permission to this repo, either by logging in with
huggingface-cli login
or by passing token=<your_token>
How can I solve this?
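For context, this error usually means the string in the config is neither an existing directory nor a valid Hub repo id. Below is a minimal sketch of the resolution order transformers follows; the Hub id "meta-llama/Llama-2-7b-chat-hf" is my assumption about the checkpoint that was intended here.

```python
import os

# The failing string from the error message: transformers first checks whether it
# is a local directory, and only then treats it as a Hugging Face Hub repo id.
model_ref = "llama/Llama-2-7b-chat"

# Assumption: the intended gated checkpoint on the Hub is "meta-llama/Llama-2-7b-chat-hf";
# using that id (after accepting the license and logging in) should avoid the OSError.
hub_id = "meta-llama/Llama-2-7b-chat-hf"

if os.path.isdir(model_ref):
    resolution = "local folder"
else:
    # No such directory exists, so transformers falls through to a Hub lookup,
    # which then fails because "llama/Llama-2-7b-chat" is not a repo that exists.
    resolution = "hub lookup"

print(model_ref, "->", resolution)
```

So the fix is either to make model_ref an absolute path to the downloaded weights, or to use the correct gated Hub id together with a login or token.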