Open rafaelpadilla opened 1 year ago
Thank you for your interest in our work. Unfortunately, it is not yet integrated into Hugging Face, so it can't be loaded simply by calling its name; we will support that soon. For now, you need to download the weights from Hugging Face by git-cloning that repo (i.e., downloading the weights locally) and specify that model path in the yaml file.
I have the same problem. I followed the instructions on GitHub:

- I cloned the repository.
- I installed the dependencies with `pip install -e .`
- I downloaded the model from Hugging Face using wget and put it into a new subfolder called `model`.
- I edited the config in `./bliva/configs/models/bliva_vicuna7b.yaml` and put the absolute path there, which is `/home/username/VQA/BLIVA/model/`. I tried with the filename of the weights and without, and with a trailing slash and without.

I always get the following error when I run `evaluate.py`:

```
OSError: Incorrect path_or_model_id: 'path to vicuna checkpoint'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
```
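Note that the OSError above quotes the literal placeholder string `'path to vicuna checkpoint'`, which suggests a separate field in the yaml (the Vicuna LLM checkpoint, distinct from the BLIVA `.pth` weight) was left unedited. A hypothetical sketch of that field, assuming the key name `llm_model` mentioned elsewhere in this thread (the path is illustrative):

```yaml
# bliva/configs/models/bliva_vicuna7b.yaml (hypothetical excerpt)
# Replace the shipped placeholder 'path to vicuna checkpoint'
# with a real local Vicuna checkpoint path:
llm_model: "/home/username/VQA/vicuna-7b"  # illustrative path
```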
Thank you for your interest in our work. Unfortunately, wget is not the proper way to download from a Hugging Face data repo. The proper way to do it is described here: https://github.com/mlpc-ucsd/BLIVA/issues/19#issuecomment-1880332805. Also make sure the path to the weight file goes all the way to the `.pth` ending, i.e. ends with `bliva_vicuna7b.pth`.
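For reference, a minimal Python sketch of this approach using `huggingface_hub` rather than wget, assuming the weight file in `mlpc-lab/BLIVA_Vicuna` is named `bliva_vicuna7b.pth` as stated above (the helper names are hypothetical, not part of the repo):

```python
# Hedged sketch, not the repo's official tooling: fetch the BLIVA weight
# with huggingface_hub and check that the path written into the yaml ends
# at the .pth file itself, as the maintainer advises.

def validate_weight_path(path: str) -> str:
    """Reject paths that stop at the folder instead of the .pth file."""
    if not path.endswith("bliva_vicuna7b.pth"):
        raise ValueError(
            f"yaml path must end with bliva_vicuna7b.pth, got: {path!r}"
        )
    return path


def download_weight(local_dir: str) -> str:
    """Fetch the weight file and return its local path (needs network)."""
    # Imported here so validate_weight_path works without huggingface_hub.
    from huggingface_hub import hf_hub_download

    return validate_weight_path(
        hf_hub_download(
            repo_id="mlpc-lab/BLIVA_Vicuna",
            filename="bliva_vicuna7b.pth",
            local_dir=local_dir,
        )
    )
```

The returned path (ending in `bliva_vicuna7b.pth`) is what should go into `bliva_vicuna7b.yaml`, not the containing folder.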
I tried to run the following evaluate example and got this error. Bliva vicuna is defined in `bliva_vicuna7b.yaml`; however, the `llm_model` checkpoint is not defined. What model do you recommend using in each case? I tried setting `llm_model` to `mlpc-lab/BLIVA_Vicuna`, but it still doesn't work. Any suggestion?