open-compass / opencompass

OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
https://opencompass.org.cn/
Apache License 2.0

[Bug] When I attempted to perform the agent evaluation, the console returned an error: "AttributeError: 'OpenAI' object has no attribute 'chat'". #1259

Open · CaptainJi opened this issue 2 weeks ago

CaptainJi commented 2 weeks ago

Prerequisite

Type

I'm evaluating with the officially supported tasks/models/datasets.

Environment

/bin/sh: 1: gcc: not found {'CUDA available': False, 'GCC': 'n/a', 'MMEngine': '0.10.4', 'MUSA available': False, 'OpenCV': '4.10.0', 'PyTorch': '2.3.1', 'PyTorch compiling details': 'PyTorch built with:\n' ' - GCC 9.3\n' ' - C++ Version: 201703\n' ' - Intel(R) oneAPI Math Kernel Library Version ' '2023.1-Product Build 20230303 for Intel(R) 64 ' 'architecture applications\n' ' - Intel(R) MKL-DNN v3.3.6 (Git Hash ' '86e6af5974177e513fd3fee58425e1063e7f1361)\n' ' - OpenMP 201511 (a.k.a. OpenMP 4.5)\n' ' - LAPACK is enabled (usually provided by ' 'MKL)\n' ' - NNPACK is enabled\n' ' - CPU capability usage: AVX512\n' ' - Build settings: BLAS_INFO=mkl, ' 'BUILD_TYPE=Release, ' 'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, ' 'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 ' '-fabi-version=11 -fvisibility-inlines-hidden ' '-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO ' '-DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER ' '-DUSE_FBGEMM -DUSE_QNNPACK ' '-DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK ' '-DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC ' '-Wall -Wextra -Werror=return-type ' '-Werror=non-virtual-dtor -Werror=bool-operation ' '-Wnarrowing -Wno-missing-field-initializers ' '-Wno-type-limits -Wno-array-bounds ' '-Wno-unknown-pragmas -Wno-unused-parameter ' '-Wno-unused-function -Wno-unused-result ' '-Wno-strict-overflow -Wno-strict-aliasing ' '-Wno-stringop-overflow -Wsuggest-override ' '-Wno-psabi -Wno-error=pedantic ' '-Wno-error=old-style-cast -Wno-missing-braces ' '-fdiagnostics-color=always -faligned-new ' '-Wno-unused-but-set-variable ' '-Wno-maybe-uninitialized -fno-math-errno ' '-fno-trapping-math -Werror=format ' '-Wno-stringop-overflow, LAPACK_INFO=mkl, ' 'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, ' 'PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, ' 'USE_CUDA=0, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, ' 'USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, ' 'USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, ' 'USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, ' 'USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, ' 'USE_ROCM_KERNEL_ASSERT=OFF, \n', 'Python': '3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]', 'TorchVision': '0.18.1', 'numpy_random_seed': 2147483648, 'opencompass': '0.2.5+e0d7808', 'sys.platform': 'linux'}

Reproduces the problem - code/configuration sample

from mmengine.config import read_base
from opencompass.models.openai_api import OpenAI
from opencompass.partitioners import SizePartitioner
from opencompass.runners import LocalRunner
from opencompass.tasks import OpenICLInferTask
from opencompass.models.lagent import LagentAgent
from opencompass.lagent.actions.python_interpreter import PythonInterpreter
from lagent import ReAct
from lagent.agents.react import ReActProtocol

with read_base():
    from .datasets.gsm8k.gsm8k_agent_gen_c3dff3 import gsm8k_datasets
    from .datasets.math.math_agent_gen_af2293 import math_datasets
    from .datasets.MathBench.mathbench_gen import mathbench_datasets
    from .summarizers.math_agent import summarizer

datasets = []
datasets += gsm8k_datasets
datasets += math_datasets
datasets += mathbench_datasets

system_prompt = """You are a helpful assistant which use tools to solve mathematical reasoning questions. The code must be a function, and the function name must be 'solution'. For mathematics, please use code tool to calculate. The example format is as follows:

def solution(): variable_names_with_real_meaning = func(variable) return variable_names_with_real_meaning


protocol = dict(
    type=ReActProtocol,
    action=dict(role='ACTION', begin='Tool:', end='\n'),
    action_input=dict(role='ARGS', begin='Tool Input:', end='\n'),
    finish=dict(role='FINISH', begin='FinalAnswer:', end='\n'),
    call_protocol=system_prompt,
)

models = [
    dict(
        abbr='gpt-3.5-react',
        type=LagentAgent,
        agent_type=ReAct,
        max_turn=3,
        llm=dict(
            type=OpenAI,
            path='gpt-3.5-turbo',
            key='ENV',
            query_per_second=1,
            max_seq_len=4096,
        ),
        actions=[
            dict(type=PythonInterpreter),
        ],
        protocol=protocol,
        batch_size=1,
    ),
]

infer = dict(
    partitioner=dict(type=SizePartitioner, max_task_size=1000),
    runner=dict(
        type=LocalRunner,
        max_num_workers=16,
        task=dict(type=OpenICLInferTask)),
)

Reproduces the problem - command or script

python run.py configs/eval_chat_agent.py --debug

Reproduces the problem - error message

06/20 10:56:20 - OpenCompass - DEBUG - Get class `AgentInferencer` from "icl_inferencers" registry in "opencompass"
06/20 10:56:20 - OpenCompass - DEBUG - An `AgentInferencer` instance is built from registry, and its implementation can be found in opencompass.openicl.icl_inferencer.icl_agent_inferencer
[2024-06-20 10:56:20,714] [opencompass.openicl.icl_inferencer.icl_chat_inferencer] [INFO] Starting inference process...
  0%|                                                                                                                                                                                                                                                                      | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/opencompass/run.py", line 4, in <module>
    main()
  File "/opencompass/opencompass/cli/main.py", line 298, in main
    runner(tasks)
  File "/opencompass/opencompass/runners/base.py", line 38, in __call__
    status = self.launch(tasks)
  File "/opencompass/opencompass/runners/local.py", line 116, in launch
    task.run()
  File "/opencompass/opencompass/tasks/openicl_infer.py", line 90, in run
    self._inference()
  File "/opencompass/opencompass/tasks/openicl_infer.py", line 135, in _inference
    inferencer.inference(retriever,
  File "/opencompass/opencompass/openicl/icl_inferencer/icl_chat_inferencer.py", line 250, in inference
    self.infer_last(chat, index, output_handler)
  File "/opencompass/opencompass/openicl/icl_inferencer/icl_agent_inferencer.py", line 92, in infer_last
    answer, steps, _ = self.model.chat(chat[user_idx]['content'])
  File "/opencompass/opencompass/models/lagent.py", line 94, in chat
    generation: AgentReturn = self.agent.chat(user_input)
  File "/opt/conda/envs/opencompass/lib/python3.10/site-packages/lagent/agents/react.py", line 233, in chat
    response = self._llm.chat(prompt, **kwargs)
AttributeError: 'OpenAI' object has no attribute 'chat'

Other information

When I try to start the agent evaluation using the command python run.py configs/eval_chat_agent.py --debug, the console reports an error: "AttributeError: 'OpenAI' object has no attribute 'chat'".
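The traceback suggests a mismatch: lagent's ReAct agent calls self._llm.chat(prompt, **kwargs), while the OpenCompass OpenAI API wrapper passed in as llm only exposes generate(). If that is the cause, a thin adapter that forwards chat to generate might work as a temporary shim. This is only a hypothetical sketch, not an official fix: the class name ChatCompatibleOpenAI is made up, and the generate() signature used here (a list of prompts plus max_out_len, returning a list of strings) is an assumption about the OpenCompass API.

# Hypothetical workaround sketch, not the official fix.
from opencompass.models.openai_api import OpenAI


class ChatCompatibleOpenAI(OpenAI):
    """OpenAI wrapper that also exposes the chat() method lagent >= 0.2.0 calls."""

    def chat(self, prompt, **kwargs):
        # Forward a single chat turn to the batch generate() interface and
        # return the first (and only) completion.
        return self.generate([prompt], max_out_len=kwargs.get('max_out_len', 512))[0]

With such a shim, the model config would set llm=dict(type=ChatCompatibleOpenAI, ...) instead of type=OpenAI. Whether lagent 0.2.x passes a plain string or a list of message dicts as prompt would still need to be checked against the installed version.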

CaptainJi commented 2 weeks ago

After testing, I found that lagent version 0.1.3 runs normally, while versions 0.2.0 through 0.2.2 do not.
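If downgrading is acceptable, pinning lagent to the known-good release (version taken from the observation above) should restore the agent evaluation while waiting for a proper fix:

pip install lagent==0.1.3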