FoundationVision / Groma

[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
https://groma-mllm.github.io/
Apache License 2.0

Test run fails with "local variable 'sentencepiece_model_pb2' referenced before assignment" #17

Closed (ovjust closed this 4 months ago)

ovjust commented 5 months ago

When I run this command on Linux:

    python -m groma.eval.run_groma \
        --model-name /mnt/hgfs/E/1MyFiles/code/aiTest/llm_groma-7b-finetune \
        --image-file /home/kun/Downloads/arm_vlm/agent_demo_20240527/temp/vl_now.jpg \
        --query '请问这个图片的分辨率是多少?' \
        --quant_type 'none'

(The query asks: "What is the resolution of this image?")

I get this error:

    The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
    Traceback (most recent call last):
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/kun/Downloads/llm_Groma-main/groma/eval/run_groma.py", line 141, in <module>
        eval_model(model_name, args.quant_type, args.image_file, args.query)
      File "/home/kun/Downloads/llm_Groma-main/groma/eval/run_groma.py", line 41, in eval_model
        tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 727, in from_pretrained
        return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1854, in from_pretrained
        return cls._from_pretrained(
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2017, in _from_pretrained
        tokenizer = cls(*init_inputs, **init_kwargs)
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 156, in __init__
        self.sp_model = self.get_spm_processor()
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 164, in get_spm_processor
        model_pb2 = import_protobuf()
      File "/home/kun/miniconda3/envs/groma/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 40, in import_protobuf
        return sentencepiece_model_pb2
    UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment

machuofan commented 5 months ago

This error is related to tokenizer initialization with transformers. What is your transformers version?
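For context, the traceback ends in transformers' import_protobuf() helper, which binds sentencepiece_model_pb2 only inside a conditional import; when the protobuf dependency is missing or unusable, the final return references a name that was never assigned. A simplified sketch of that failure pattern (illustrative, not the exact transformers source):

    from transformers.utils import is_protobuf_available

    # Illustrative sketch of the pattern in convert_slow_tokenizer.py,
    # not the exact source code:
    def import_protobuf():
        if is_protobuf_available():  # False when the protobuf package is missing
            from transformers.utils import sentencepiece_model_pb2
        return sentencepiece_model_pb2  # left unbound -> UnboundLocalError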

ovjust commented 5 months ago

I am using this new version: https://huggingface.co/FoundationVision/groma-7b-finetune/tree/main

machuofan commented 5 months ago

Emmm, I mean the transformers package. You can check it using pip show transformers. Make sure you are using transformers==4.32.0.
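You can also check it from Python directly, which avoids ambiguity when several environments are installed:

    import transformers

    # Groma expects transformers==4.32.0; other versions change the
    # tokenizer code paths involved in this error.
    print(transformers.__version__)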

caichuang0415 commented 5 months ago

> Emmm, I mean the transformers package. You can check it using pip show transformers. Make sure you are using transformers==4.32.0.

I have made sure that the transformers version is 4.32.0, but I still ran into the same problem.

ovjust commented 5 months ago

> Emmm, I mean the transformers package. You can check it using pip show transformers. Make sure you are using transformers==4.32.0.

Hello, which environment can this project run in? I ran it on Ubuntu 22 in VMware, with transformers==4.32.0, and got the error above.

When I run pip install -e . on Windows, I get:

    [WARNING] One can disable async_io with DS_BUILD_AIO=0
    [ERROR] Unable to pre-compile async_io
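(This async_io failure is expected on Windows: DeepSpeed's async_io op requires libaio, which is Linux-only. As the warning itself suggests, you can skip building it, e.g. in cmd:)

    set DS_BUILD_AIO=0
    pip install -e .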

I will try on another Linux machine that is not VMware-based.

machuofan commented 5 months ago

My local environment is Debian 11, but I guess the code should also run on other Linux systems.

The issue most likely originates from transformers, as discussed in this issue.

BTW, you may try setting use_fast=True to see if it works, e.g.,

    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
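If the checkpoint ships a tokenizer.json, the fast tokenizer loads it directly and never goes through the sentencepiece/protobuf conversion path that raises the UnboundLocalError. A small defensive sketch (the model path is a placeholder, not from this repo):

    from transformers import AutoTokenizer

    model_name = "/path/to/groma-7b-finetune"  # placeholder: your local checkpoint path

    # Prefer the fast tokenizer (loads tokenizer.json directly when available,
    # bypassing the sentencepiece/protobuf path); fall back to the slow one.
    try:
        tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
    except Exception:
        tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)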
weirui0430 commented 1 month ago

> When I run this command on Linux, I get "UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment" (full command and traceback in the original post above).

Have you resolved this issue?

pengzhansun commented 1 week ago

> When I run this command on Linux, I get "UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment" (full command and traceback in the original post above).
>
> Have you resolved this issue?

I tried the suggestion in https://github.com/huggingface/transformers/issues/25848#issuecomment-1698615652 and it worked well for me.
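(For anyone landing here later: independent of the linked comment, a quick environment sanity check may help, since import_protobuf() needs the protobuf package and a missing install is one common trigger of this UnboundLocalError. A generic check, not the exact fix from that comment:)

    # Check whether the protobuf package that transformers' import_protobuf()
    # relies on is importable in the current environment.
    try:
        import google.protobuf
        print("protobuf", google.protobuf.__version__)
    except ImportError:
        print("protobuf is not installed; try: pip install protobuf")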