mbzuai-oryx / Video-ChatGPT

[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
https://mbzuai-oryx.github.io/Video-ChatGPT
Creative Commons Attribution 4.0 International

RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] #91

Closed · rjccv closed this issue 8 months ago

rjccv commented 8 months ago

Hello, I have been trying to use Video-ChatGPT for single-video inference (the video demo) and am having difficulties getting set up. I have followed the installation steps laid out in the README, and have downloaded the LLaVA weights per this issue post:

git lfs install
git clone https://huggingface.co/mmaaz60/LLaVA-7B-Lightening-v1-1

Yet, when I run this script:

python video_chatgpt/demo/video_demo.py --model-name /home/Video-ChatGPT/LLaVA-7B-Lightening-v1-1 --projection_path /home/Video-ChatGPT/video_chatgpt-7B.bin

I get the following error:

RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]

The error occurs during the AutoTokenizer.from_pretrained(model_name) call in video_chatgpt/eval/model_utils.py.
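For reference, the failure reproduces in isolation with just the tokenizer load (a minimal sketch, using the same local path as in my command above):

# Minimal reproduction, isolated from the demo script.
from transformers import AutoTokenizer

# Raises: RuntimeError: Internal: src/sentencepiece_processor.cc(1101)
# [model_proto->ParseFromArray(serialized.data(), serialized.size())]
tokenizer = AutoTokenizer.from_pretrained("/home/Video-ChatGPT/LLaVA-7B-Lightening-v1-1")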

I have re-downloaded the weights on the off chance that the .bin files were corrupted, but the error persists. I have also checked my package versions; they match requirements.txt and the config.json in the /LLaVA-7B-Lightening-v1-1 folder (it expects "transformers_version": "4.28.0.dev0", which is what is installed).

According to these posts, this error can occur from a corrupt .bin file or from using the wrong tokenizer. From my understanding, though, AutoTokenizer is a generalized wrapper, so I don't think the tokenizer choice is what is causing the error.
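One check that would rule the tokenizer file in or out independently of transformers (my own sketch, assuming the vocabulary sits in the usual tokenizer.model file) is to parse it with sentencepiece directly:

# sentencepiece is the library named in the traceback; parsing the file
# directly raises the same Internal error if the file itself is corrupt.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("/home/Video-ChatGPT/LLaVA-7B-Lightening-v1-1/tokenizer.model")
print(sp.vocab_size())  # a sane vocab size means the file parsed cleanly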

Any help is appreciated, thanks.

mmaaz60 commented 8 months ago

Hi @rjccv

Thank you for your interest in our work. The error seems to be caused by either corrupted checkpoints or a mismatched transformers version. Can you double-check whether the error persists after installing transformers with the following command?

pip install transformers@git+https://github.com/huggingface/transformers.git@cae78c46
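After installing, a quick sanity check (a sketch) that Python actually imports the version you expect:

# Confirm the imported version matches what config.json expects.
import transformers
print(transformers.__version__)  # expected: 4.28.0.dev0, per the config.json mentioned above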

Further, please share the environment and hardware details you are using so I can comment further. Thanks

rjccv commented 8 months ago

Thank you for the quick response. When I run that command, a warning in the git log says that the branch or tag cae78c46 could not be found:

[Screenshot from 2024-03-07 16-00-31: git warning that cae78c46 could not be found]

This is the version that appears as installed when I run conda list transformers:

[Screenshot from 2024-03-07 16-01-09: output of conda list transformers]

In LLaVA-7B-Lightening-v1-1/config.json, the "transformers_version" field matches the output of my conda environment, so I don't believe that is the cause.

{ "_name_or_path": "liuhaotian/LLaVA-Lightning-7B-delta-v1-1", "architectures": [ "LlavaLlamaForCausalLM" ], ... "transformers_version": "4.28.0.dev0", "tune_mm_mlp_adapter": false, "use_cache": false, "use_mm_proj": true, "vocab_size": 32003 }

I did notice, though, that in LLaVA-7B-Lightening-v1-1/tokenizer_config.json there is a "special_tokens_map_file" field that points to a checkpoint folder not present in my directory. Could this be causing the issue? If I need this file, where can I download it from?

{
...
  "special_tokens_map_file": "./checkpoints/vicuna-7b-v1-1/special_tokens_map.json",
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}

Alternatively, if this is not the issue and the checkpoints are corrupted, is there another way to download them? As mentioned in the link I referenced, Hugging Face never sends out an email for the LLaMA weights, so that doesn't seem like an option.

My conda environment is exactly as instructed in the readme and I am attempting to run this on one RTX 3080 Ti GPU.

mmaaz60 commented 8 months ago

Hi @rjccv

Can you try using the special tokens map from https://huggingface.co/lmsys/vicuna-7b-v1.1? Also, please try loading the base Vicuna model and see if the issue appears there as well. Thanks
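If it helps, that file can be fetched on its own with huggingface_hub (a sketch; I am assuming the standard file name special_tokens_map.json is present in the repo):

# Download just the special tokens map from the base Vicuna repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="lmsys/vicuna-7b-v1.1",
                       filename="special_tokens_map.json")
print(path)  # local cache path to point tokenizer_config.json at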

rjccv commented 8 months ago

Okay, I have changed the "special_tokens_map_file" path to point to my local special_tokens_map.json, since the two files are the same. And I am able to load the base model without any errors when I replace the script as below:

    # In video_chatgpt/eval/model_utils.py (needs AutoModelForCausalLM imported
    # from transformers alongside AutoTokenizer)

    # # Load tokenizer
    # tokenizer = AutoTokenizer.from_pretrained(model_name)

    # # Load model
    # model = VideoChatGPTLlamaForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True,
    #                                                      torch_dtype=torch.float16, use_cache=True)

    # Load the base Vicuna model instead, to isolate the tokenizer error
    tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.1")
    model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.1", low_cpu_mem_usage=True,
                                                 torch_dtype=torch.float16, use_cache=True)

rjccv commented 8 months ago

I was able to resolve this. The .bin files were indeed corrupted, because I didn't have git-lfs installed globally, so the clone fetched LFS pointer files instead of the actual weights. No error surfaced after git lfs install because I ran the commands sequentially, so the failure was easy to miss:

git lfs install
git clone https://huggingface.co/mmaaz60/LLaVA-7B-Lightening-v1-1

After installing git-lfs and properly re-downloading the weights, your model worked as expected. Anyway, thank you for all your help. I'm looking forward to playing around with your model.
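For anyone who hits the same error, a quick check (my own sketch, not an official step) that tells un-fetched LFS pointers apart from real weight shards:

# Un-fetched git-lfs files are small text pointers, not multi-GB weights.
from pathlib import Path

for p in Path("LLaVA-7B-Lightening-v1-1").glob("*.bin"):
    head = p.open("rb").read(48)  # pointer files start with a fixed header
    if head.startswith(b"version https://git-lfs.github.com/spec"):
        print(f"{p.name}: LFS pointer only ({p.stat().st_size} bytes) -- re-clone with git-lfs installed")
    else:
        print(f"{p.name}: looks like real weights ({p.stat().st_size / 1e9:.2f} GB)")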