THUDM / CogVLM

a state-of-the-art-level open visual language model | multimodal pretrained model
Apache License 2.0

TypeError #530

Open conheaven opened 1 month ago

conheaven commented 1 month ago

System Info

NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2

python -c "import transformers; print(transformers.__version__)"
4.45.2

Python 3.10.15

Who can help?

@zr

Information

Reproduction

python cli_demo_hf.py --bf16
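
For context, the failing call can also be reproduced without the CLI wrapper. The sketch below is adapted from the repo README's inference example; the image path and query are the ones from this report, and bf16 mirrors the --bf16 flag.

```python
# Minimal sketch (adapted from the CogVLM README) of the generate() path that
# cli_demo_hf.py --bf16 exercises. Image path and query are placeholders.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("/data1/khw/cogvlm/dataset/dog.jpeg").convert("RGB")
inputs = model.build_conversation_input_ids(
    tokenizer, query="what is this", history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}
gen_kwargs = {"max_length": 2048, "do_sample": False}

with torch.no_grad():
    # With transformers 4.45.2 this raises the TypeError shown below.
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0]))
```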

Expected behavior

After entering the image path and a question, the following error is raised:

```
image path >>>>> /data1/khw/cogvlm/dataset/dog.jpeg
Human:what is this
Traceback (most recent call last):
  File "/data1/khw/cogvlm/CogVLM/basic_demo/cli_demo_hf.py", line 127, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
  File "/data1/khw/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data1/khw/miniconda3/envs/cogvlm/lib/python3.10/site-packages/transformers/generation/utils.py", line 2047, in generate
    result = self._sample(
  File "/data1/khw/miniconda3/envs/cogvlm/lib/python3.10/site-packages/transformers/generation/utils.py", line 3055, in _sample
    model_kwargs = self._update_model_kwargs_for_generation(
  File "/home/khw/.cache/huggingface/modules/transformers_modules/cogvlm-chat-hf/modeling_cogvlm.py", line 750, in _update_model_kwargs_for_generation
    model_kwargs["past_key_values"] = self._extract_past_from_model_output(
TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
```
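
Not a confirmed fix, but the traceback shows that GenerationMixin._extract_past_from_model_output in transformers 4.45.2 no longer accepts the standardize_cache_format keyword, while the cached remote code (modeling_cogvlm.py, line 750) still passes it. Two things worth trying: downgrade transformers to an earlier release that still accepts that keyword, or apply a small compatibility shim before calling generate(). The shim below is only a sketch; its name and the tuple-unwrapping behavior are assumptions about the newer transformers API, not part of the repo.

```python
# Hypothetical compatibility shim (not an official CogVLM fix): wrap
# GenerationMixin._extract_past_from_model_output so it tolerates the
# removed `standardize_cache_format` keyword that modeling_cogvlm.py
# still passes, and hand back just the cache object as the old API did.
from transformers.generation.utils import GenerationMixin

_orig_extract = GenerationMixin._extract_past_from_model_output

def _extract_past_compat(self, outputs, standardize_cache_format=False, **kwargs):
    result = _orig_extract(self, outputs)
    # Newer transformers may return a (cache_name, cache) tuple; the remote
    # CogVLM code expects only the cache, so unwrap it if needed.
    if isinstance(result, tuple) and len(result) == 2 and isinstance(result[0], str):
        return result[1]
    return result

GenerationMixin._extract_past_from_model_output = _extract_past_compat
# Apply this patch before model.generate(...) runs, e.g. near the top of cli_demo_hf.py.
```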