Vision-CAIR / MiniGPT-4

Open-source code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
BSD 3-Clause "New" or "Revised" License

error starting demo.py post LLAMA #103

Open SammyKunimatsu opened 1 year ago

SammyKunimatsu commented 1 year ago

```
(minigpt4) C:\Users\SammySan\Documents\Code\MiniGPT-4>python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id -1
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA
Traceback (most recent call last):
  File "C:\Users\SammySan\Documents\Code\MiniGPT-4\demo.py", line 60, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "C:\Users\SammySan\Documents\Code\MiniGPT-4\minigpt4\models\mini_gpt4.py", line 243, in from_config
    model = cls(
  File "C:\Users\SammySan\Documents\Code\MiniGPT-4\minigpt4\models\mini_gpt4.py", line 86, in __init__
    self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False)
  File "C:\Users\SammySan\.conda\envs\minigpt4\lib\site-packages\transformers\tokenization_utils_base.py", line 1770, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "C:\Users\SammySan\.conda\envs\minigpt4\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\SammySan\.conda\envs\minigpt4\lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\SammySan\.conda\envs\minigpt4\lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use repo_type argument if needed.
```

taomanwai commented 1 year ago

same here

TsuTikgiau commented 1 year ago

Hello! Your error message shows the placeholder vicuna weights path '/path/to/vicuna/weights/'. It looks like you haven't set the vicuna weight path in the config file. You can find more details in the vicuna weight preparation part of the README. Thanks!
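Concretely, the `llama_model` field in the model config still holds the placeholder and needs to point at the merged Vicuna weight directory on your machine. A minimal sketch of the edit (the Windows path below is an illustrative assumption, not a real location):

```yaml
# minigpt4/configs/models/minigpt4.yaml (excerpt)
model:
  arch: mini_gpt4
  # Replace the placeholder '/path/to/vicuna/weights/' with the local
  # directory that holds your merged Vicuna weights, e.g.:
  llama_model: "C:/Users/SammySan/Documents/vicuna-7b-weights"
```

When `llama_model` is a directory that exists locally, `LlamaTokenizer.from_pretrained` loads it from disk; if the path does not exist, transformers treats the string as a Hugging Face Hub repo id, which is why the placeholder triggers an `HFValidationError` instead of a file-not-found error.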

sonygod commented 1 year ago

Got the same error here. Here is my config:


```yaml
model:
  arch: mini_gpt4
  model_type: pretrain_vicuna
  freeze_vit: True
  freeze_qformer: True
  max_txt_len: 160
  end_sym: "###"
  low_resource: True
  prompt_path: "prompts/alignment.txt"
  prompt_template: '###Human: {} ###Assistant: '
  ckpt: '/content/MiniGPT-4/prerained_minigpt4_7b.pth'

datasets:
  cc_sbu_align:
    vis_processor:
      train:
        name: "blip2_image_eval"
        image_size: 224
    text_processor:
      train:
        name: "blip_caption"

run:
  task: image_text_pretrain
```

sonygod commented 1 year ago

Here is how I fixed this issue:

You have to set the `llama_model` path here, in the model config, instead of in /content/MiniGPT-4/eval_configs/minigpt4_eval.yaml:

https://github.com/RiseInRose/MiniGPT-4-ZH/blob/main/minigpt4/configs/models/minigpt4.yaml#L16
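In other words, the line to edit lives in the model config rather than the eval config. A sketch of what that line looks like (the `/content/vicuna-7b` path is an example assumption; substitute your own weight directory):

```yaml
# minigpt4/configs/models/minigpt4.yaml, around the linked line
llama_model: "/content/vicuna-7b"   # point this at your merged Vicuna weight directory
```

The eval config (`minigpt4_eval.yaml`) only overrides fields such as `ckpt` and `low_resource`; the base model config is where the tokenizer and LLM weights are resolved from, which is why setting the path only in the eval config leaves the placeholder in place.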