01-ai / Yi

A series of large language models trained from scratch by developers @01-ai
https://01.ai
Apache License 2.0

Running the model error #473

Closed nas7sou closed 4 months ago

nas7sou commented 5 months ago

Environment

- OS: Ubuntu 22.04
- Python: 3.10.12
- PyTorch: 2.1.2+cu121
- CUDA:


Current Behavior

Hello, I am a beginner with GitHub and coding, so I think it is an easy problem to solve, but I don't know how.

Expected Behavior

No response

Steps to Reproduce

What I did:

Error that I had: OSError: Incorrect path_or_model_id: '/home/Yi/VL/model/vit/clip-vit-H-14-laion2B-s32B-b79K-yi-vl-6B-448'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Anything Else?

No response

BoomStarcuc commented 5 months ago

I got the same error. Have you solved this problem?

Yimi81 commented 5 months ago

@nas7sou Find the config.json file of the Yi-VL-6B you downloaded locally, and change the mm_vision_tower field in it to an absolute path.
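The edit can also be done in a couple of lines of Python instead of by hand. A minimal sketch (the function name and both paths in the example are placeholders — substitute wherever you actually stored the model and the ViT weights):

```python
import json

def set_vision_tower(config_path, tower_path):
    """Rewrite the mm_vision_tower entry in a Yi-VL config.json."""
    with open(config_path) as f:
        config = json.load(f)
    config["mm_vision_tower"] = tower_path
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example (adjust both paths to your local layout):
# set_vision_tower(
#     "/home/user/Yi-VL-6B/config.json",
#     "/home/user/Yi-VL-6B/vit/clip-vit-H-14-laion2B-s32B-b79K-yi-vl-6B-448",
# )
```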

nas7sou commented 5 months ago

Hello @Yimi81, thank you for your response. I did what you said, as you can see in the picture below: error_bis

But unfortunately I still get the following error:

    Traceback (most recent call last):
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 389, in cached_file
        resolved_file = hf_hub_download(
      File "/home/milan/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
        validate_repo_id(arg_value)
      File "/home/milan/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
        raise HFValidationError(
    huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/milan/Yi/VL/model/home/milan/Yi/VL/model/vit/clip-vit-H-14-laion2B-s32B-b79K-yi-vl-6B-448'. Use repo_type argument if needed.

The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/milan/Yi/VL/single_inference.py", line 110, in <module>
        single_infer(args)
      File "/home/milan/Yi/VL/single_inference.py", line 32, in single_infer
        tokenizer, model, image_processor, context_len = load_pretrained_model(model_path)
      File "/home/milan/Yi/VL/llava/mm_utils.py", line 79, in load_pretrained_model
        model = LlavaLlamaForCausalLM.from_pretrained(
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3462, in from_pretrained
        model = cls(config, *model_args, **model_kwargs)
      File "/home/milan/Yi/VL/llava/model/llava_llama.py", line 44, in __init__
        self.model = LlavaLlamaModel(config)
      File "/home/milan/Yi/VL/llava/model/llava_llama.py", line 36, in __init__
        super(LlavaLlamaModel, self).__init__(config)
      File "/home/milan/Yi/VL/llava/model/llava_arch.py", line 34, in __init__
        self.vision_tower = build_vision_tower(config, delay_load=True)
      File "/home/milan/Yi/VL/llava/model/clip_encoder/builder.py", line 11, in build_vision_tower
        return CLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
      File "/home/milan/Yi/VL/llava/model/clip_encoder/clip_encoder.py", line 19, in __init__
        self.cfg_only = CLIPVisionConfig.from_pretrained(self.vision_tower_name)
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/models/clip/configuration_clip.py", line 251, in from_pretrained
        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 644, in get_config_dict
        config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 699, in _get_config_dict
        resolved_config_file = cached_file(
      File "/home/milan/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 454, in cached_file
        raise EnvironmentError(
    OSError: Incorrect path_or_model_id: '/home/milan/Yi/VL/model/home/milan/Yi/VL/model/vit/clip-vit-H-14-laion2B-s32B-b79K-yi-vl-6B-448'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
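Note the doubled prefix in the failing path (`/home/milan/Yi/VL/model/home/milan/Yi/VL/model/...`): the value from config.json is still being joined onto the model directory, which suggests the configured value is not being picked up as an absolute path. A quick hedged sanity check (the helper name is just for illustration) is to read back what config.json actually contains before rerunning inference:

```python
import json
import os

def check_vision_tower(config_path):
    """Return (is_absolute, exists_on_disk) for the mm_vision_tower entry."""
    with open(config_path) as f:
        tower = json.load(f)["mm_vision_tower"]
    return os.path.isabs(tower), os.path.isdir(tower)

# Example: both values should be True before running single_inference.py
# print(check_vision_tower("/home/milan/Yi/VL/model/config.json"))
```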

BoomStarcuc commented 5 months ago

@nas7sou I solved this problem. The main issue is that the model files were not downloaded completely.

You need to install the Git LFS extension on your Linux machine; please follow these steps:

  1. Install Git LFS using apt:

    sudo apt update
    sudo apt install git-lfs
  2. Initialize Git LFS:

    git lfs install
  3. Navigate to your model directory in a terminal and run:

    git lfs pull

    If it still does not work, re-download the model from Hugging Face after initializing Git LFS:

    git clone https://huggingface.co/01-ai/Yi-6B-Chat

You can also follow https://git-lfs.com/ for instructions on installing git-lfs. Hope this helps solve your problem.
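A telltale sign of an incomplete download is that the large weight files are still Git LFS pointer stubs: tiny text files beginning with `version https://git-lfs.github.com/spec/v1` instead of the real multi-gigabyte binaries. As a sketch (the function name and directory path are placeholders), you can scan a model folder for such stubs:

```python
import os

LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def find_lfs_stubs(model_dir):
    """List files that are still un-downloaded Git LFS pointer stubs."""
    stubs = []
    for root, _, files in os.walk(model_dir):
        for name in files:
            path = os.path.join(root, name)
            # Pointer files are tiny (~130 bytes) and start with the LFS header.
            if os.path.getsize(path) < 200:
                with open(path, "rb") as f:
                    if f.read(len(LFS_MAGIC)) == LFS_MAGIC:
                        stubs.append(path)
    return stubs

# Example: an empty list means the weights were fully downloaded.
# print(find_lfs_stubs("/home/user/Yi-VL-6B"))
```

If the list is non-empty, `git lfs pull` inside that directory should replace the stubs with the actual weights.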