open-compass / opencompass

OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
https://opencompass.org.cn/
Apache License 2.0

[Bug] Evaluating the AFQMC dataset with hf_chatglm3_6b: self-test results disagree with the official score and are unstable across runs #1202

Open jieheroli opened 5 months ago

jieheroli commented 5 months ago

Prerequisites

Type of the issue

I am evaluating with officially supported tasks/models/datasets.

Environment

{'CUDA available': True,
 'CUDA_HOME': '/usr/local/cuda-12.3',
 'GCC': 'x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0',
 'GPU 0,1,2,3': 'NVIDIA L40S',
 'MMEngine': '0.10.4',
 'MUSA available': False,
 'NVCC': 'Cuda compilation tools, release 12.3, V12.3.52',
 'OpenCV': '4.9.0',
 'PyTorch': '2.2.2+cu121',
 'PyTorch compiling details': 'PyTorch built with:\n'
                              '  - GCC 9.3\n'
                              '  - C++ Version: 201703\n'
                              '  - Intel(R) oneAPI Math Kernel Library Version '
                              '2022.2-Product Build 20220804 for Intel(R) 64 '
                              'architecture applications\n'
                              '  - Intel(R) MKL-DNN v3.3.2 (Git Hash '
                              '2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)\n'
                              '  - OpenMP 201511 (a.k.a. OpenMP 4.5)\n'
                              '  - LAPACK is enabled (usually provided by '
                              'MKL)\n'
                              '  - NNPACK is enabled\n'
                              '  - CPU capability usage: AVX512\n'
                              '  - CUDA Runtime 12.1\n'
                              '  - NVCC architecture flags: '
                              '-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n'
                              '  - CuDNN 8.9.2\n'
                              '  - Magma 2.6.1\n'
                              '  - Build settings: BLAS_INFO=mkl, '
                              'BUILD_TYPE=Release, CUDA_VERSION=12.1, '
                              'CUDNN_VERSION=8.9.2, '
                              'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, '
                              'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 '
                              '-fabi-version=11 -fvisibility-inlines-hidden '
                              '-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO '
                              '-DLIBKINETO_NOROCTRACER -DUSE_FBGEMM '
                              '-DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK '
                              '-DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE '
                              '-O2 -fPIC -Wall -Wextra -Werror=return-type '
                              '-Werror=non-virtual-dtor -Werror=bool-operation '
                              '-Wnarrowing -Wno-missing-field-initializers '
                              '-Wno-type-limits -Wno-array-bounds '
                              '-Wno-unknown-pragmas -Wno-unused-parameter '
                              '-Wno-unused-function -Wno-unused-result '
                              '-Wno-strict-overflow -Wno-strict-aliasing '
                              '-Wno-stringop-overflow -Wsuggest-override '
                              '-Wno-psabi -Wno-error=pedantic '
                              '-Wno-error=old-style-cast -Wno-missing-braces '
                              '-fdiagnostics-color=always -faligned-new '
                              '-Wno-unused-but-set-variable '
                              '-Wno-maybe-uninitialized -fno-math-errno '
                              '-fno-trapping-math -Werror=format '
                              '-Wno-stringop-overflow, LAPACK_INFO=mkl, '
                              'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, '
                              'PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, '
                              'USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, '
                              'USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, '
                              'USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, '
                              'USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, '
                              'USE_ROCM_KERNEL_ASSERT=OFF, \n',
 'Python': '3.10.14 (main, Apr  6 2024, 18:45:05) [GCC 9.4.0]',
 'TorchVision': '0.17.2+cu121',
 'numpy_random_seed': 2147483648,
 'opencompass': '0.2.4+19d7e63',
 'sys.platform': 'linux'}

Reproduces the problem - code/configuration sample

# hf_chatglm3_6b.py
from opencompass.models import HuggingFaceChatGLM3

api_meta_template = dict(
    round=[
        dict(role='HUMAN', api_role='HUMAN'),
        dict(role='BOT', api_role='BOT', generate=True),
    ]
)

chatglm3_6b = dict(
    type=HuggingFaceChatGLM3,
    abbr='chatglm3-6b-hf',
    # path='THUDM/chatglm3-6b',
    path='/export/custom_models/THUDM/chatglm3-6b',
    # tokenizer_path='THUDM/chatglm3-6b',
    tokenizer_path='/export/custom_models/THUDM/chatglm3-6b',
    model_kwargs=dict(
        device_map='auto',
        trust_remote_code=True,
    ),
    tokenizer_kwargs=dict(
        padding_side='left',
        truncation_side='left',
        trust_remote_code=True,
    ),
    meta_template=api_meta_template,
    max_out_len=100,
    max_seq_len=4096,
    batch_size=1,
    run_cfg=dict(num_gpus=1, num_procs=1),
)

models = [chatglm3_6b]

# CLUE_afqmc_gen.py
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets import AFQMCDataset_V2
from opencompass.utils.text_postprocessors import first_capital_postprocess

afqmc_reader_cfg = dict(
    input_columns=["sentence1", "sentence2"],
    output_column="label",
    test_split="train")

afqmc_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(round=[
            dict(
                role="HUMAN",
                prompt=
                "语句一:“{sentence1}”\n语句二:“{sentence2}”\n语句一与语句二是关于蚂蚁金融产品的疑问,两者所询问的内容是否完全一致?\nA. 不完全一致\nB. 完全一致\n请从“A”,“B”中进行选择。\n答:",
            ),
        ]),
    ),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer),
)

afqmc_eval_cfg = dict(
    evaluator=dict(type=AccEvaluator),
    pred_role="BOT",
    pred_postprocessor=dict(type=first_capital_postprocess),
)

afqmc_datasets = [
    dict(
        abbr="afqmc-dev",
        type=AFQMCDataset_V2,
        path="./data/CLUE/AFQMC/dev.json",
        reader_cfg=afqmc_reader_cfg,
        infer_cfg=afqmc_infer_cfg,
        eval_cfg=afqmc_eval_cfg,
    ),
]
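
For context on how the A/B answer is scored (a rough sketch of my understanding, not the library's exact source): first_capital_postprocess keeps only the first uppercase letter of the model's reply, so the AccEvaluator ends up comparing that single letter against the gold "A"/"B" label.

# Rough equivalent of opencompass.utils.text_postprocessors.first_capital_postprocess
def first_capital_postprocess(text: str) -> str:
    # Return the first uppercase letter in the reply, or '' if none is found.
    for ch in text:
        if ch.isupper():
            return ch
    return ''

print(first_capital_postprocess('B. 完全一致'))  # -> 'B'
print(first_capital_postprocess('答案是 A'))     # -> 'A'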

Reproduces the problem - command or script

python run.py --models hf_chatglm3_6b --datasets CLUE_afqmc_gen

Reproduces the problem - error message

When I evaluate the CLUE_afqmc_gen dataset with the hf_chatglm3_6b model, the officially reported score is 63.9, but my own results fluctuate between 57.0 and 60.0.

Other information

[Change] The only modification I made was loading the model from a local path instead of downloading it from the internet.

[Observations]

1. After running the same command several times, the config files under the different timestamped directories in outputs/default/ are identical except for workdir.
2. Predictions for the same question differ across those timestamped directories (screenshot attached).
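
To make observation 2 concrete, here is a quick sanity check (my own sketch, not opencompass code; the prompt text is made up): load the model the same way and call its chat() twice with one prompt. With modeling_chatglm.py's default do_sample=True, the two replies frequently differ.

# determinism_check.py (illustrative sketch)
from transformers import AutoModel, AutoTokenizer

path = '/export/custom_models/THUDM/chatglm3-6b'  # local path from the config above
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModel.from_pretrained(path, trust_remote_code=True, device_map='auto').eval()

prompt = '语句一:“花呗如何还款”\n语句二:“花呗怎么还钱”\n请从“A”,“B”中进行选择。\n答:'
resp1, _ = model.chat(tokenizer, prompt, history=[])
resp2, _ = model.chat(tokenizer, prompt, history=[])
print(resp1 == resp2)  # often False: sampling makes the generations nondeterministic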

jieheroli commented 5 months ago

After some digging, I found the problem. When the model class HuggingFaceChatGLM3 is instantiated, generation_kwargs is empty. This gives the false impression that inference runs with do_sample=False, i.e., greedy search.

During actual inference, however, the model's generate_from_template() method is called. That method is inherited from the parent class BaseModel and internally calls generate(), which HuggingFaceChatGLM3 overrides. Inside generate(), a chat() method is invoked; this chat() is defined in the modeling_chatglm.py file under the model path, which is loaded dynamically at runtime. Its signature is shown below:

def chat(self, tokenizer, query: str, history: List[Tuple[str, str]] = None, role: str = "user",
         max_length: int = 8192, num_beams=1, do_sample=True, top_p=0.8, temperature=0.8,
         logits_processor=None, **kwargs):

As the defaults show, sampling (do_sample=True, top_p=0.8, temperature=0.8) is in fact enabled at inference time. This is the root cause of the unstable evaluation results.
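
Based on this diagnosis, a workaround sketch: set generation_kwargs explicitly in the model config so that do_sample=False overrides chat()'s sampling defaults and decoding becomes greedy. Caveat: this assumes the installed opencompass version actually forwards generation_kwargs from HuggingFaceChatGLM3.generate() into model.chat(); check that forwarding in opencompass/models/huggingface.py before trusting the numbers.

# hf_chatglm3_6b.py (workaround sketch, untested)
from opencompass.models import HuggingFaceChatGLM3

api_meta_template = dict(round=[
    dict(role='HUMAN', api_role='HUMAN'),
    dict(role='BOT', api_role='BOT', generate=True),
])

chatglm3_6b = dict(
    type=HuggingFaceChatGLM3,
    abbr='chatglm3-6b-hf',
    path='/export/custom_models/THUDM/chatglm3-6b',
    tokenizer_path='/export/custom_models/THUDM/chatglm3-6b',
    model_kwargs=dict(device_map='auto', trust_remote_code=True),
    tokenizer_kwargs=dict(
        padding_side='left',
        truncation_side='left',
        trust_remote_code=True,
    ),
    # Force greedy decoding: overrides chat()'s do_sample=True/top_p/temperature
    # defaults, assuming these kwargs are forwarded to model.chat().
    generation_kwargs=dict(do_sample=False),
    meta_template=api_meta_template,
    max_out_len=100,
    max_seq_len=4096,
    batch_size=1,
    run_cfg=dict(num_gpus=1, num_procs=1),
)
models = [chatglm3_6b]

With greedy decoding the run-to-run variance should disappear; whether the score then matches the official 63.9 depends on the decoding settings used for the official run.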