Closed: dsouzavijeth closed this issue 5 months ago
Hi~
Have you changed the config file? Please make sure that if you want to download LLaMA2 from Hugging Face, the path in line 41 of multi-modal-eval.yaml should be meta-llama/Llama-2-7b-chat-hf. Otherwise, please point the path to your downloaded directory.
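For reference, a minimal sketch of how the two options behave, assuming the model is loaded through the standard transformers from_pretrained interface (the exact model class used in this repo may differ, and the local path below is only a placeholder):

```python
from transformers import AutoTokenizer

# Option 1: a Hub repo id -> transformers downloads from Hugging Face
# (requires access to the gated meta-llama repo).
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Option 2: an existing local directory -> nothing is downloaded.
# "/abs/path/to/Llama-2-7b-chat-hf" is a placeholder for your own directory.
tok = AutoTokenizer.from_pretrained("/abs/path/to/Llama-2-7b-chat-hf")
```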
In line 41 of multi-modal-eval.yaml, I have pointed the path to the downloaded directory. But when I run lhrs_webui.py, it tries to download from Hugging Face and does not recognize the downloaded directory.
Thanks for raising your question.
It's a little bit strange; I have tested the same code and it is fine. Could you please try to set your absolute path in line 41?
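If you are unsure what the absolute path is, something like this prints the value to paste into line 41 (the relative path below is just a placeholder):

```python
import os

# Placeholder relative path; replace with the directory you actually downloaded.
print(os.path.abspath("./Llama-2-7b-chat-hf"))
```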
Thanks! This solved the issue.
I have the Llama-2-7b-chat-hf model directory downloaded locally, and the path parameter in multi_modal_eval.yaml is updated to the local directory path.
When I try to run
python lhrs_webui.py -c Config/multi_modal_eval.yaml --checkpoint-path $model_chkpts/Stage1/FINAL.pt --share
I get the following error:
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'llama/Llama-2-7b-chat'. Use 'repo_type' argument if needed.
OSError: Incorrect path_or_model_id: 'llama/Llama-2-7b-chat'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
Can you please suggest whether any change needs to be made in the code to access the locally downloaded Llama-2-7b-chat-hf model?
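For reference, as far as I understand, when the string passed to from_pretrained is not an existing directory, transformers falls back to treating it as a Hub repo id, which would explain the HFValidationError above. A quick sanity check on the configured path (the path below is a placeholder for whatever is set in line 41):

```python
import os

model_path = "/abs/path/to/Llama-2-7b-chat-hf"  # placeholder for the value in line 41

# If this prints False, transformers never sees a local folder and instead
# tries to interpret the string as a repo id on the Hub.
print(os.path.isdir(model_path))

# The directory should contain config.json, the tokenizer files, and the weight shards.
print(os.listdir(model_path))
```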