open-compass / opencompass

OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
https://opencompass.org.cn/
Apache License 2.0

[Bug] AttributeError: 'str' object has no attribute 'shape' #1319

Status: Open. wenhuiwwh opened this issue 2 months ago.

wenhuiwwh commented 2 months ago

Prerequisites

Type of Problem

I'm evaluating with the officially supported tasks/models/datasets.

Environment

{'CUDA available': True, 'CUDA_HOME': '/usr/local/cuda-11.8', 'GCC': 'gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0', 'GPU 0,1': 'NVIDIA A40', 'MMEngine': '0.10.4', 'MUSA available': False, 'NVCC': 'Cuda compilation tools, release 11.8, V11.8.89', 'OpenCV': '4.10.0', 'PyTorch': '2.2.1+cu121', 'PyTorch compiling details': 'PyTorch built with:\n' ' - GCC 9.3\n' ' - C++ Version: 201703\n' ' - Intel(R) oneAPI Math Kernel Library Version ' '2022.2-Product Build 20220804 for Intel(R) 64 ' 'architecture applications\n' ' - Intel(R) MKL-DNN v3.3.2 (Git Hash ' '2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)\n' ' - OpenMP 201511 (a.k.a. OpenMP 4.5)\n' ' - LAPACK is enabled (usually provided by ' 'MKL)\n' ' - NNPACK is enabled\n' ' - CPU capability usage: AVX512\n' ' - CUDA Runtime 12.1\n' ' - NVCC architecture flags: ' '-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n' ' - CuDNN 8.9.6 (built against CUDA 11.8)\n' ' - Built with CuDNN 8.9.2\n' ' - Magma 2.6.1\n' ' - Build settings: BLAS_INFO=mkl, ' 'BUILD_TYPE=Release, CUDA_VERSION=12.1, ' 'CUDNN_VERSION=8.9.2, ' 'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, ' 'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 ' '-fabi-version=11 -fvisibility-inlines-hidden ' '-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO ' '-DLIBKINETO_NOROCTRACER -DUSE_FBGEMM ' '-DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK ' '-DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE ' '-O2 -fPIC -Wall -Wextra -Werror=return-type ' '-Werror=non-virtual-dtor -Werror=bool-operation ' '-Wnarrowing -Wno-missing-field-initializers ' '-Wno-type-limits -Wno-array-bounds ' '-Wno-unknown-pragmas -Wno-unused-parameter ' '-Wno-unused-function -Wno-unused-result ' '-Wno-strict-overflow -Wno-strict-aliasing ' '-Wno-stringop-overflow -Wsuggest-override ' '-Wno-psabi -Wno-error=pedantic ' '-Wno-error=old-style-cast -Wno-missing-braces ' '-fdiagnostics-color=always -faligned-new ' '-Wno-unused-but-set-variable ' '-Wno-maybe-uninitialized -fno-math-errno ' '-fno-trapping-math -Werror=format ' '-Wno-stringop-overflow, LAPACK_INFO=mkl, ' 'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, ' 'PERF_WITH_AVX512=1, TORCH_VERSION=2.2.1, ' 'USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, ' 'USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, ' 'USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, ' 'USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, ' 'USE_ROCM_KERNEL_ASSERT=OFF, \n', 'Python': '3.9.16 (main, May 15 2023, 23:46:34) [GCC 11.2.0]', 'TorchVision': '0.17.1+cu121', 'numpy_random_seed': 2147483648, 'opencompass': '0.2.6+', 'sys.platform': 'linux'}

Reproduces the problem - code/configuration sample

The code of hf_glm4_9b_chat.py is as follows:

from opencompass.models import HuggingFacewithChatTemplate

models = [
    dict(
        type=HuggingFacewithChatTemplate,
        abbr='glm-4-9b-hf-chat',
        path='/data/whwang22/pretrained_model/glm-4-9b-chat',
        max_out_len=1024,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
        stop_words=['<|endoftext|>', '<|user|>', '<|observation|>'],
    )
]

The code of eval_chat_demo.py is as follows:

from mmengine.config import read_base

with read_base():
    from .datasets.cmb.cmb_gen_dfb5c4 import cmb_datasets
    from .models.chatglm.hf_glm4_9b_chat import models as hf_glm4_9b_models

datasets = cmb_datasets
models = hf_glm4_9b_models

Reproduces the problem - command or script

python run.py configs/eval_chat_demo.py -w outputs/glm4_chat_demo --debug

Reproduces the problem - error message

outputs = self.model.generate(**tokens, **generation_kwargs)

File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, kwargs) File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/transformers/generation/utils.py", line 1914, in generate result = self._sample( File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/transformers/generation/utils.py", line 2651, in _sample outputs = self( File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(args, kwargs) File "/home/whwang22/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 878, in forward transformer_outputs = self.transformer( File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "/home/whwang22/.conda/envs/LLM_medicalGPT/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl return forward_call(args, **kwargs) File "/home/whwang22/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 764, in forward full_attention_mask = self.get_masks(input_ids, past_key_values, padding_mask=attention_mask) File "/home/whwang22/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 658, in get_masks past_length = past_key_values[0][0].shape[2] AttributeError: 'str' object has no attribute 'shape' [2024-07-12 22:02:50,942] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 38296) of binary: /home/whwang22/.conda/envs/LLM_medicalGPT/bin/python

Other information

No response

keepgoingwang commented 2 months ago

I have encountered the same issue: the model is also chatglm4-9b-chat, and the version of transformers is 4.42.3.

keepgoingwang commented 2 months ago

> I have encountered the same issue: the model is also chatglm4-9b-chat, and the version of transformers is 4.42.3.

I have solved this problem: when using this model, the transformers version should be 4.40.0. https://huggingface.co/THUDM/glm-4-9b-chat/discussions/63
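Per the comment above and the linked discussion, the workaround is to pin transformers in the evaluation environment, e.g.:

pip install transformers==4.40.0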