I guess it was just fixed at the latest commit. Would you mind pulling the latest change and trying again?
Amazing, I pulled the latest commit and it works now.
After checking the commit history between the non-working 0d574c0 and the latest ff5ab92, the only difference in actual code is the following line in run.py:
But I am still confused about why this line causes the behavior I ran into. Can you explain that? Thanks.
This option makes it possible to run several tasks on the same GPU, in case users have a powerful GPU that cannot be fully utilized by a single task. For example, we found that a 7B model does not saturate A100-80GB and the GPU utilization rate is constantly below 50%. Setting it to 32 by default was a mistake.
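For intuition, here is a minimal Python sketch of the idea. All names in it (e.g. `tasks_per_gpu`, `assign_tasks`) are hypothetical illustrations, not OpenCompass's actual API; it only shows how a "tasks per GPU" knob changes how work is packed onto devices.

```python
from itertools import cycle

def assign_tasks(tasks, gpu_ids, tasks_per_gpu=1):
    """Round-robin tasks over GPU 'slots'.

    tasks_per_gpu controls how many tasks a single GPU is allowed to host;
    this sketch only shows the static assignment, not the actual scheduling.
    """
    # Expand each GPU into `tasks_per_gpu` slots, then cycle tasks over the slots.
    slots = [gpu for gpu in gpu_ids for _ in range(tasks_per_gpu)]
    assignment = {}
    for task, gpu in zip(tasks, cycle(slots)):
        assignment.setdefault(gpu, []).append(task)
    return assignment

# With tasks_per_gpu=1 each GPU hosts a single task:
print(assign_tasks(["ceval-logic", "ceval-law"], gpu_ids=[0, 1], tasks_per_gpu=1))
# A large value (like the old default of 32) packs many tasks onto one device,
# which can exhaust memory when a single task already needs most of the GPU:
print(assign_tasks([f"task-{i}" for i in range(8)], gpu_ids=[0, 1], tasks_per_gpu=4))
```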
Prerequisites
Problem type
I am evaluating with officially supported tasks/models/datasets.
Environment
Reproducing the problem - code/configuration example
configs/eval.py
configs/hf_chatglm2_6b.py
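The contents of these two configs are not reproduced here. For readers unfamiliar with the layout, a minimal eval.py in the usual OpenCompass style might look like the sketch below; the import paths and names are assumptions, not the reporter's actual files.

```python
# Hypothetical minimal configs/eval.py (assumed layout, not the reporter's actual file)
from mmengine.config import read_base

with read_base():
    # Pull in the CEval dataset definitions and the ChatGLM2-6B model definition.
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .hf_chatglm2_6b import models

datasets = [*ceval_datasets]
```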
Reproducing the problem - command or script
Reproducing the problem - error message
command output:
ceval-logic.out (the other .out files look similar):
Other information