NJU-LHRS / LHRS-Bot

VGI-Enhanced multimodal large language model for remote sensing images.
Apache License 2.0

The inference code tries to download the LLaMA2-7B model from HuggingFace #3

Closed: dsouzavijeth closed this issue 5 months ago

dsouzavijeth commented 5 months ago

I have the Llama-2-7b-chat-hf model directory downloaded locally, and the path parameter in multi_modal_eval.yaml has been updated to point to that local directory.

When I try to run `python lhrs_webui.py -c Config/multi_modal_eval.yaml --checkpoint-path $model_chkpts/Stage1/FINAL.pt --share`, I get the following error:

huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'llama/Llama-2-7b-chat'. Use 'repo_type' argument if needed.

OSError: Incorrect path_or_model_id: 'llama/Llama-2-7b-chat'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Could you please suggest whether any change needs to be made in the code to use the locally downloaded Llama-2-7b-chat-hf model?

pUmpKin-Co commented 5 months ago

Hi~ Have you changed the config file? If you want to download LLaMA2 from Hugging Face, please make sure the path in line 41 of multi_modal_eval.yaml is meta-llama/Llama-2-7b-chat-hf. Otherwise, point the path to your downloaded directory.
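
For reference, a minimal sketch of what that entry in multi_modal_eval.yaml could look like; the key name `path` and the surrounding layout are assumptions based on this thread, not the actual file:

```yaml
# Hypothetical excerpt of Config/multi_modal_eval.yaml (around line 41); key name assumed.
# Option 1: let the code download the weights from the Hugging Face Hub.
path: meta-llama/Llama-2-7b-chat-hf
# Option 2: point to an already downloaded local directory instead, e.g.
# path: /data/models/Llama-2-7b-chat-hf
```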

dsouzavijeth commented 5 months ago

In line 41 of multi_modal_eval.yaml, I have pointed the path to the downloaded directory. But when I run lhrs_webui.py, it still tries to download from Hugging Face and does not recognize the downloaded directory.

pUmpKin-Co commented 5 months ago

Thanks for raising the question. That's a bit strange; I have tested the same code and it works fine. Could you please try setting the absolute path in line 41?
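
For example (the directory below is hypothetical), the entry could be changed from a relative to an absolute path:

```yaml
# Hypothetical: use the absolute location of the downloaded weights
# instead of a relative path such as llama/Llama-2-7b-chat.
path: /home/<user>/models/Llama-2-7b-chat-hf
```

A relative path is resolved against the directory lhrs_webui.py is launched from; if that directory is not found, the string falls through to Hub repo-id resolution, which would explain the error above. An absolute path sidesteps this.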

dsouzavijeth commented 5 months ago

Thanks! This solved the issue.