NJU-LHRS / LHRS-Bot

VGI-Enhanced multimodal large language model for remote sensing images.

Where should I place the directory for the llama-2 model? #15

Closed YongTaeIn closed 1 month ago

YongTaeIn commented 1 month ago

I enjoyed your project. I'd like to ask about an error I'm running into. After downloading llama2-7b, I was able to log in to Hugging Face successfully through the terminal.

However, the following error occurs during the inference process.

OSError: llama/Llama-2-7b-chat is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>

How can I solve this?
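For reference, the OSError comes from how the model string is resolved. A minimal sketch, assuming a standard transformers from_pretrained call (LHRS-Bot's own loading code may differ, and the paths shown are only placeholders):

```python
from transformers import AutoModelForCausalLM

# from_pretrained first checks whether the string is an existing local directory;
# if not, it is treated as a Hub repo id of the form "org/name".
AutoModelForCausalLM.from_pretrained("/path/to/Llama-2-7b-chat-hf")  # local folder with config.json etc.: OK

AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", token="hf_xxx"                  # gated Hub repo: OK once access is granted
)

AutoModelForCausalLM.from_pretrained("llama/Llama-2-7b-chat")        # neither a folder nor a valid repo id
                                                                     # -> raises the OSError quoted above
```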

pUmpKin-Co commented 1 month ago

Hi~ Thanks for your interest.

You should change the path in the config so it points to your downloaded folder. The instructions can be found here.
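If it helps, a quick sanity check that the folder you point the config at is a complete Hugging Face-format checkpoint (file names assumed from the standard Llama 2 HF layout; the path is a placeholder):

```python
import os

llama_dir = "/path/to/Llama-2-7b-chat-hf"  # the folder you put in the config

# A Hugging Face-format checkpoint keeps the model config and tokenizer files
# alongside the weights; the raw Meta download does not ship these files.
for name in ["config.json", "tokenizer.model", "tokenizer_config.json"]:
    present = os.path.exists(os.path.join(llama_dir, name))
    print(name, "found" if present else "MISSING")
```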

YongTaeIn commented 1 month ago

Thanks for the answer; I solved the previous problem. By the way, should there be a config.json file in the llama folder I downloaded from the Meta website? I ran the download.sh file, but there is no config file.

pUmpKin-Co commented 1 month ago

Hi~ We deploy our model based on the Hugging Face version. Therefore, it's better for you to get the license from here and decide whether you need to download all the files locally or just paste meta-llama/Llama-2-7b-hf into the config path.
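For example, the full Hugging Face checkpoint can be fetched to a local folder with huggingface_hub once access to the gated repo has been granted (a sketch; the target directory name is just an example):

```python
from huggingface_hub import snapshot_download

# Requires a prior `huggingface-cli login` with a token that has been granted
# access to the gated meta-llama repos.
local_dir = snapshot_download(
    repo_id="meta-llama/Llama-2-7b-hf",
    local_dir="./Llama-2-7b-hf",  # point the LHRS-Bot config at this folder
)
print(local_dir)
```

Alternatively, if only the repo id meta-llama/Llama-2-7b-hf is placed in the config, the files are downloaded to the Hub cache on first use.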

YongTaeIn commented 1 month ago

Thanks for helping :)