open-compass / opencompass

OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
https://opencompass.org.cn/
Apache License 2.0

[Bug] #650

Closed: ALLISWELL8 closed this issue 7 months ago

ALLISWELL8 commented 7 months ago

Prerequisites

Type of the issue

I have modified the code (configs are not considered code), or I am working on my own task/model/dataset.

Environment

{'CUDA available': True, 'CUDA_HOME': '/usr/local/cuda', 'GCC': 'gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0', 'GPU 0,1,2,3': 'NVIDIA GeForce RTX 4090', 'MMEngine': '0.10.1', 'NVCC': 'Cuda compilation tools, release 12.1, V12.1.105', 'OpenCV': '4.8.1', 'PyTorch': '2.1.1', 'PyTorch compiling details': 'PyTorch built with:\n' ' - GCC 9.3\n' ' - C++ Version: 201703\n' ' - Intel(R) oneAPI Math Kernel Library Version ' '2023.1-Product Build 20230303 for Intel(R) 64 ' 'architecture applications\n' ' - Intel(R) MKL-DNN v3.1.1 (Git Hash ' '64f6bcbcbab628e96f33a62c3e975f8535a7bde4)\n' ' - OpenMP 201511 (a.k.a. OpenMP 4.5)\n' ' - LAPACK is enabled (usually provided by ' 'MKL)\n' ' - NNPACK is enabled\n' ' - CPU capability usage: AVX2\n' ' - CUDA Runtime 12.1\n' ' - NVCC architecture flags: ' '-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n' ' - CuDNN 8.9.2\n' ' - Magma 2.6.1\n' ' - Build settings: BLAS_INFO=mkl, ' 'BUILD_TYPE=Release, CUDA_VERSION=12.1, ' 'CUDNN_VERSION=8.9.2, ' 'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, ' 'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 ' '-fabi-version=11 -fvisibility-inlines-hidden ' '-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO ' '-DLIBKINETO_NOROCTRACER -DUSE_FBGEMM ' '-DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK ' '-DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE ' '-O2 -fPIC -Wall -Wextra -Werror=return-type ' '-Werror=non-virtual-dtor -Werror=bool-operation ' '-Wnarrowing -Wno-missing-field-initializers ' '-Wno-type-limits -Wno-array-bounds ' '-Wno-unknown-pragmas -Wno-unused-parameter ' '-Wno-unused-function -Wno-unused-result ' '-Wno-strict-overflow -Wno-strict-aliasing ' '-Wno-stringop-overflow -Wno-psabi ' '-Wno-error=pedantic -Wno-error=old-style-cast ' '-Wno-invalid-partial-specialization ' 
'-Wno-unused-private-field ' '-Wno-aligned-allocation-unavailable ' '-Wno-missing-braces -fdiagnostics-color=always ' '-faligned-new -Wno-unused-but-set-variable ' '-Wno-maybe-uninitialized -fno-math-errno ' '-fno-trapping-math -Werror=format ' '-Werror=cast-function-type ' '-Wno-stringop-overflow, LAPACK_INFO=mkl, ' 'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, ' 'PERF_WITH_AVX512=1, ' 'TORCH_DISABLE_GPU_ASSERTS=ON, ' 'TORCH_VERSION=2.1.1, USE_CUDA=ON, USE_CUDNN=ON, ' 'USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, ' 'USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, ' 'USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, ' 'USE_OPENMP=ON, USE_ROCM=OFF, \n', 'Python': '3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]', 'TorchVision': '0.16.1', 'numpy_random_seed': 2147483648, 'opencompass': '0.1.9+', 'sys.platform': 'linux'}

Reproduces the problem - code/configuration sample

```python
from mmengine.config import read_base
from opencompass.models import OpenAI, BaiChuan
from opencompass.partitioners import NaivePartitioner
from opencompass.runners.local_api import LocalAPIRunner
from opencompass.tasks import OpenICLInferTask

with read_base():
    from ..summarizers.medium import summarizer
    from ..datasets.humaneval.humaneval_gen import humaneval_datasets

datasets = [
    *humaneval_datasets,
]

models = [
    dict(
        type=OpenAI,                             # Use the OpenAI model
        # Parameters for OpenAI initialization
        path='gpt-3.5-turbo',                    # Specify the model type
        key='',                                  # OpenAI API Key
        max_seq_len=2048,                        # The max input number of tokens
        # Common parameters shared by various models, not specific to `OpenAI` initialization.
        abbr='GPT-4',                            # Model abbreviation used for result display.
        max_out_len=512,                         # Maximum number of generated tokens.
        batch_size=1,                            # The size of a batch during inference.
        run_cfg=dict(num_gpus=0),                # Resource requirements (no GPU needed)
    ),
]

infer = dict(
    partitioner=dict(type=NaivePartitioner),
    runner=dict(
        type=LocalAPIRunner,
        max_num_workers=2,
        concurrent_users=2,
        task=dict(type=OpenICLInferTask)),
)

work_dir = "/public20_data/zs/zs/Chat_API/output/"
```
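One way to avoid the empty `key=''` field in a config like this is to pull the key from the environment instead of hard-coding it. OpenCompass's OpenAI wrapper also documents `key='ENV'` to read `OPENAI_API_KEY` itself (check your version). The helper below (`resolve_api_key` is a hypothetical name, not an OpenCompass API) sketches that fallback:

```python
import os

def resolve_api_key(explicit_key: str = '') -> str:
    """Prefer a key given explicitly in the config; otherwise fall back to
    the OPENAI_API_KEY environment variable (empty string if unset)."""
    return explicit_key or os.environ.get('OPENAI_API_KEY', '')

# With `export OPENAI_API_KEY=sk-...` in the shell, the config can then
# use key=resolve_api_key() instead of a hard-coded (or empty) string.
```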

Reproduces the problem - command or script

python run.py /public20_data/zs/zs/Chat_API/configs/api_examples/eval_api_chatgpt.py

(The eval_api_chatgpt.py file above is a config I wrote myself.)

Reproduces the problem - error message

```
11/30 07:21:10 - OpenCompass - INFO - Task [GPT-4/openai_humaneval]
11/30 07:21:12 - OpenCompass - INFO - Start inferencing [GPT-4/openai_humaneval]
[2023-11-30 07:21:13,037] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
  0%| | 0/164 [00:00<?, ?it/s]
--- Logging error ---
Traceback (most recent call last):
  File "/public20_data/zs/zs/Chat_API/opencompass/models/openai_api.py", line 235, in _generate
    return response['choices'][0]['message']['content'].strip()
KeyError: 'choices'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 1100, in emit
    msg = self.format(record)
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 943, in format
    return fmt.format(record)
  File "/root/anaconda/envs/opencompass/lib/python3.10/site-packages/mmengine/logging/logger.py", line 124, in format
    result = logging.Formatter.format(self, record)
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 678, in format
    record.message = record.getMessage()
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 368, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/root/anaconda/envs/opencompass/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/root/anaconda/envs/opencompass/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/root/anaconda/envs/opencompass/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/thread.py", line 83, in _worker
    work_item.run()
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/public20_data/zs/zs/Chat_API/opencompass/models/openai_api.py", line 246, in _generate
    self.logger.error('Find error message in response: ',
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 1506, in error
    self._log(ERROR, msg, args, **kwargs)
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 1624, in _log
    self.handle(record)
  File "/root/anaconda/envs/opencompass/lib/python3.10/logging/__init__.py", line 1634, in handle
    self.callHandlers(record)
  File "/root/anaconda/envs/opencompass/lib/python3.10/site-packages/mmengine/logging/logger.py", line 306, in callHandlers
    handler.handle(record)
Message: 'Find error message in response: '
Arguments: ("{'message': 'Incorrect API key provided: sk-T2ArH*****SN3N. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}",)
```

(The identical logging error and traceback are emitted a second time for the retry, and then the run aborts:)

```
  0%| | 0/164 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/public20_data/zs/zs/Chat_API/opencompass/runners/local_api.py", line 121, in launch
    inferencer.run(inferencer, tokens)
  File "/public20_data/zs/zs/Chat_API/opencompass/runners/local_api.py", line 51, in monkey_run
    self._inference()
  File "/public20_data/zs/zs/Chat_API/opencompass/tasks/openicl_infer.py", line 121, in _inference
    inferencer.inference(retriever,
  File "/public20_data/zs/zs/Chat_API/opencompass/openicl/icl_inferencer/icl_gen_inferencer.py", line 133, in inference
    results = self.model.generate_from_template(
  File "/public20_data/zs/zs/Chat_API/opencompass/models/base.py", line 127, in generate_from_template
    return self.generate(inputs, max_out_len=max_out_len, **kwargs)
  File "/public20_data/zs/zs/Chat_API/opencompass/models/openai_api.py", line 123, in generate
    results = list(
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/root/anaconda/envs/opencompass/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/public20_data/zs/zs/Chat_API/opencompass/models/openai_api.py", line 250, in _generate
    raise RuntimeError('Calling OpenAI failed after retrying for '
RuntimeError: Calling OpenAI failed after retrying for 2 times. Check the logs for details.
```

Other information

My goal is to test how APIs such as GPT perform on the HumanEval code dataset, so I added an eval_api_chatgpt.py file to the Chat_API/configs/api_examples/ folder and downloaded the HumanEval dataset locally, but running it raises the error above.

tonysy commented 7 months ago

```
{'message': 'Incorrect API key provided: sk-T2ArH*******************************SN3N. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}
```
ALLISWELL8 commented 7 months ago

I have set the correct key now.

ALLISWELL8 commented 7 months ago

```
return response['choices'][0]['message']['content'].strip()
KeyError: 'choices'
```

tonysy commented 7 months ago

(screenshot omitted) The program cannot get a valid response, which causes the KeyError.
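In other words, when the API returns an error payload there is no 'choices' field to index, so line 235 raises KeyError before the real error message is surfaced. A defensive-parsing sketch (an illustrative helper, not OpenCompass's actual code; it assumes the OpenAI error shape `{'error': {...}}`) would raise with the API's own message instead:

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant text out of a chat-completion payload, or raise a
    readable error when the payload is an error object instead.
    (Illustrative helper, not part of OpenCompass.)"""
    if 'error' in response:
        # OpenAI error payloads look like {'error': {'message': ..., 'code': ...}}
        detail = response['error']
        raise RuntimeError(f"OpenAI API error: {detail.get('message', detail)}")
    try:
        return response['choices'][0]['message']['content'].strip()
    except (KeyError, IndexError) as exc:
        raise RuntimeError(f'Unexpected response shape: {response!r}') from exc
```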

ALLISWELL8 commented 7 months ago

I just tried Baidu's API as well and found it does not respond:

```
python run.py /public20_data/zs/zs/Chat_API/configs/api_examples/eval_api_baidu.py
11/30 10:26:50 - OpenCompass - WARNING - SlurmRunner is not used, so the partition argument is ignored.
11/30 10:26:50 - OpenCompass - INFO - Partitioned into 1 tasks.
  0%| | 0/1 [00:00<?, ?it/s]
11/30 10:26:50 - OpenCompass - INFO - All the logs and processes for each task should be checked in each infer/.out file.
Launch OpenICLInfer[erniebot/openai_humaneval] on CPU
  0%| | 0/1 [00:00<?, ?it/s]
```

It hangs here and makes no progress. I only modified the eval_api_baidu.py file:

```python
from mmengine.config import read_base
from opencompass.models import ERNIEBot
from opencompass.partitioners import NaivePartitioner
from opencompass.runners.local_api import LocalAPIRunner
from opencompass.tasks import OpenICLInferTask

with read_base():
    from ..summarizers.medium import summarizer
    from ..datasets.humaneval.humaneval_gen import humaneval_datasets

datasets = [
    *humaneval_datasets,
]

models = [
    dict(
        abbr='erniebot',
        type=ERNIEBot,
        path='erniebot',
        key='erN0rhEMUxKV2MC6R',                 # please give your key
        secretkey='HPXg3WGL5dkwIQ5Vq407c0PQ9o',  # please give your group_id
        url='https://aip.baidubce.com/oauth/2.0/token',
        query_per_second=1,
        max_out_len=2048,
        max_seq_len=2048,
        batch_size=8),
]

infer = dict(
    partitioner=dict(type=NaivePartitioner),
    runner=dict(
        type=LocalAPIRunner,
        max_num_workers=2,
        concurrent_users=2,
        task=dict(type=OpenICLInferTask)),
)

work_dir = "/public20_data/zs/zs/Chat_API/output_wenxin/"
```
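The `url` in this config is Baidu's OAuth token endpoint, which per Baidu's API docs exchanges `grant_type=client_credentials` plus the API key / secret key for an access token. A quick way to rule out bad credentials independently of OpenCompass is to hit that endpoint directly; the helper below (`build_token_url` is a hypothetical name introduced here) just builds the request URL:

```python
def build_token_url(api_key: str, secret_key: str,
                    base: str = 'https://aip.baidubce.com/oauth/2.0/token') -> str:
    """Build the OAuth URL used to exchange a Baidu API key / secret key for
    an access token. Requesting this URL should return JSON containing an
    'access_token' field when the credentials are valid; an 'error' field
    instead means the key/secretkey pair is wrong."""
    return (f'{base}?grant_type=client_credentials'
            f'&client_id={api_key}&client_secret={secret_key}')
```

For example, paste the built URL into `curl -s "..."` and check whether the JSON reply contains `access_token` before pointing OpenCompass at the API.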

tonysy commented 7 months ago

Please add --debug to check what happened.