open-compass / opencompass

OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
https://opencompass.org.cn/
Apache License 2.0

[Bug] Single-node multi-GPU evaluation: the last GPU stays stuck #1494

Open bjzhb666 opened 2 weeks ago

bjzhb666 commented 2 weeks ago

Prerequisites

Problem type

I am performing evaluation with officially supported tasks/models/datasets.

Environment

{'CUDA available': True, 'CUDA_HOME': '/usr/local/cuda', 'GCC': 'gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0', 'GPU 0,1,2,3,4,5,6,7': 'Tesla V100-SXM2-32GB', 'MMEngine': '0.10.4', 'MUSA available': False, 'NVCC': 'Cuda compilation tools, release 11.8, V11.8.89', 'OpenCV': '4.10.0', 'PyTorch': '2.4.1+cu121', 'PyTorch compiling details': 'PyTorch built with:\n' ' - GCC 9.3\n' ' - C++ Version: 201703\n' ' - Intel(R) oneAPI Math Kernel Library Version ' '2022.2-Product Build 20220804 for Intel(R) 64 ' 'architecture applications\n' ' - Intel(R) MKL-DNN v3.4.2 (Git Hash ' '1137e04ec0b5251ca2b4400a4fd3c667ce843d67)\n' ' - OpenMP 201511 (a.k.a. OpenMP 4.5)\n' ' - LAPACK is enabled (usually provided by ' 'MKL)\n' ' - NNPACK is enabled\n' ' - CPU capability usage: AVX512\n' ' - CUDA Runtime 12.1\n' ' - NVCC architecture flags: ' '-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n' ' - CuDNN 90.1 (built against CUDA 12.4)\n' ' - Magma 2.6.1\n' ' - Build settings: BLAS_INFO=mkl, ' 'BUILD_TYPE=Release, CUDA_VERSION=12.1, ' 'CUDNN_VERSION=9.1.0, ' 'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, ' 'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 ' '-fabi-version=11 -fvisibility-inlines-hidden ' '-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO ' '-DLIBKINETO_NOROCTRACER -DUSE_FBGEMM ' '-DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK ' '-DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC ' '-Wall -Wextra -Werror=return-type ' '-Werror=non-virtual-dtor -Werror=bool-operation ' '-Wnarrowing -Wno-missing-field-initializers ' '-Wno-type-limits -Wno-array-bounds ' '-Wno-unknown-pragmas -Wno-unused-parameter ' '-Wno-unused-function -Wno-unused-result ' '-Wno-strict-overflow -Wno-strict-aliasing ' '-Wno-stringop-overflow -Wsuggest-override ' '-Wno-psabi -Wno-error=pedantic ' '-Wno-error=old-style-cast -Wno-missing-braces ' '-fdiagnostics-color=always -faligned-new ' '-Wno-unused-but-set-variable ' '-Wno-maybe-uninitialized -fno-math-errno ' '-fno-trapping-math -Werror=format ' '-Wno-stringop-overflow, LAPACK_INFO=mkl, ' 'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, ' 'PERF_WITH_AVX512=1, TORCH_VERSION=2.4.1, ' 'USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, ' 'USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, ' 'USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, ' 'USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, ' 'USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, ' 'USE_ROCM_KERNEL_ASSERT=OFF, \n', 'Python': '3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]', 'TorchVision': '0.19.1+cu121', 'lmdeploy': "not installed:No module named 'lmdeploy'", 'numpy_random_seed': 2147483648, 'opencompass': '0.3.1+fb6a0df', 'sys.platform': 'linux', 'transformers': '4.40.2'}

Reproduces the problem - code/configuration sample

# original vicuna7b
python run.py --datasets gsm8k_gen   \
     --hf-path /home/mnt/zhaohongbo/.cache/huggingface/hub/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d \
     --w './work_dirs/vicuna7b-gsm8k_gen'  \
     --hf-num-gpus 8

Reproduces the problem - command or script

# original vicuna7b
python run.py --datasets gsm8k_gen   \
     --hf-path /home/mnt/zhaohongbo/.cache/huggingface/hub/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d \
     --w './work_dirs/vicuna7b-gsm8k_gen'  \
     --hf-num-gpus 8

Reproduces the problem - error message

[screenshot] gpustat shows that the utilization of the 7th card stays at 0. Moreover, when the program is killed, it turns out to have been stuck on a lock here: [screenshot]

Other information

Is the way I launch the multi-GPU job correct? If I instead run it with --debug as follows:

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# original vicuna7b
python run.py --datasets gsm8k_gen   \
     --hf-path /home/mnt/zhaohongbo/.cache/huggingface/hub/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d \
     --w './work_dirs/vicuna7b-gsm8k_gen'  \
     --debug

then it runs normally, so neither the model nor the dataset is the problem.

tonysy commented 2 weeks ago

For a 7B model, you can use --max-num-workers for data parallelism. --hf-num-gpus 8 means loading one model on 8 GPUs with tensor parallelism.
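
For reference, a data-parallel variant of the reported command could look like the sketch below; the worker count is illustrative, --work-dir is spelled out in place of --w, and the model path is reused from the report above:

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# data-parallel sketch: 8 independent workers, each loading one full copy of
# the 7B model on a single GPU (the worker count here is illustrative)
python run.py --datasets gsm8k_gen \
     --hf-path /home/mnt/zhaohongbo/.cache/huggingface/hub/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d \
     --work-dir './work_dirs/vicuna7b-gsm8k_gen' \
     --hf-num-gpus 1 \
     --max-num-workers 8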

bjzhb666 commented 2 weeks ago

Thank you very much for the prompt reply! The problem has been solved. I would also like to ask: is there a multi-node multi-GPU launch method similar to torchrun? Thanks.

tonysy commented 1 week ago

Actually, we use torchrun for data parallelism. For multi-node multi-GPU, we use slurm, dlc (Aliyun), or volc (Volcengine). You can find more information in the runners. Can you provide more information or background on multi-node multi-GPU with torchrun?
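
For reference, on a Slurm cluster a run can be handed to the Slurm-backed runner directly from the CLI, as in the sketch below; the partition name my_partition and the worker count are placeholders:

# Slurm-backed sketch: each partitioned task is submitted as its own Slurm job,
# so the work spreads across the cluster without a manual torchrun wrapper
# (the partition name and worker count below are placeholders)
python run.py --datasets gsm8k_gen \
     --hf-path /home/mnt/zhaohongbo/.cache/huggingface/hub/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d \
     --work-dir './work_dirs/vicuna7b-gsm8k_gen' \
     --slurm -p my_partition \
     --max-num-workers 32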

bjzhb666 commented 1 week ago

I am using a multi-node multi-GPU scheduling setup built by my company. For example, when using torchrun, the command looks like this:

torchrun --nproc_per_node 8 \
        --nnodes=$WORLD_SIZE \
        --node_rank=$RANK \
        --master_addr=$MASTER_ADDR \
        --master_port=$MASTER_PORT \
        main.py

The cluster automatically assigns variables such as $WORLD_SIZE and launches the job on multiple nodes automatically.