QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.

[BUG] You can't train a model that has been loaded with `device_map='auto'` in any distributed mode. #218

Open whysirier opened 8 months ago

whysirier commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Following the official suggestion, I changed device_map = 'auto' to device_map = None and also tried device_map = 'balanced'; neither works.
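(For context, a minimal sketch of where device_map enters the picture when finetune.py loads the model; the exact call in the repo may differ, but with the standard transformers API it looks roughly like this:)

import torch
from transformers import AutoModelForCausalLM

# device_map='auto' lets accelerate shard the checkpoint across every visible GPU,
# which conflicts with torchrun launching one DDP process per GPU; device_map=None
# keeps the full model on a single device in each process so the Trainer can wrap it.
model = AutoModelForCausalLM.from_pretrained(
    "/mnt/data/spdi-code/Qwen-SFT/Qwen/Qwen-7B-Chat",  # MODEL path from the script below
    torch_dtype=torch.float16,   # matches --fp16 True in the launch script
    trust_remote_code=True,
    device_map=None,             # None instead of 'auto' for distributed training
)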

Here is the sh script configuration:

#!/bin/bash
export CUDA_DEVICE_MAX_CONNECTIONS=1
DIR=`pwd`

GPUS_PER_NODE=$(python -c 'import torch; print(torch.cuda.device_count())')
NNODES=1
NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001

MODEL="/mnt/data/spdi-code/Qwen-SFT/Qwen/Qwen-7B-Chat" # Set the path if you do not want to load from huggingface directly
# ATTENTION: specify the path to your training data, which should be a json file consisting of a list of conversations.
# See the section for finetuning in README for more information.
DATA="/mnt/data/spdi-code/Qwen-SFT/datasets/my_vl_data.json"

function usage() {
    echo '
Usage: bash finetune/finetune_ds.sh [-m MODEL_PATH] [-d DATA_PATH]
'
}

while [[ "$1" != "" ]]; do
    case $1 in
        -m | --model )
            shift
            MODEL=$1
            ;;
        -d | --data )
            shift
            DATA=$1
            ;;
        -h | --help )
            usage
            exit 0
            ;;
        * )
            echo "Unknown argument ${1}"
            exit 1
            ;;
    esac
    shift
done

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS finetune.py \
    --model_name_or_path $MODEL \
    --data_path $DATA \
    --fp16 True \
    --output_dir output_qwen \
    --num_train_epochs 5 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 10 \
    --learning_rate 1e-5 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --model_max_length 512 \
    --gradient_checkpointing True \
    --lazy_preprocess True
   # --deepspeed finetune/ds_config_zero3.json

Here is the error message: [screenshot: 企业微信截图_20240103140557]

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS: Ubuntu 20.04
- Python: 3.8
- Transformers: 4.32.0
- PyTorch: 2.1.2
- CUDA: 12.2

Anything else?

No response

whysirier commented 8 months ago

device_map = None and device_map = 'balanced' both fail, and device_map = 'cuda:0' raises an out-of-memory error.

jklj077 commented 8 months ago
python -c 'import torch; print(torch.cuda.device_count())'

Run this to see how many GPUs you have; if there is only one, please use the script with single_gpu in its name.
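(For reference, a minimal sketch of that check; the single-GPU script name below is illustrative and may differ by repo version:)

# count visible GPUs; with a single card, skip torchrun and use the single-GPU script
GPUS=$(python -c 'import torch; print(torch.cuda.device_count())')
if [ "$GPUS" -eq 1 ]; then
    bash finetune/finetune_lora_single_gpu.sh -m "$MODEL" -d "$DATA"
fi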

qiuwenbogdut commented 8 months ago

try this https://github.com/QwenLM/Qwen/issues/745

whysirier commented 8 months ago
python -c 'import torch; print(torch.cuda.device_count())'

Run this to see how many GPUs you have; if there is only one, please use the script with single_gpu in its name.

I have two V100s. With device_map = None, training only works if DeepSpeed is also enabled; I am not sure why.
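(For reference, a sketch of what enabling DeepSpeed looks like with this script: re-enable the --deepspeed line that is commented out at the bottom of the launch command, pointing at the ZeRO-3 config it already references; the remaining flags stay as in the script above:)

# same torchrun invocation as in the script above, with the DeepSpeed option re-enabled
torchrun $DISTRIBUTED_ARGS finetune.py \
    --model_name_or_path $MODEL \
    --data_path $DATA \
    --fp16 True \
    --output_dir output_qwen \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --deepspeed finetune/ds_config_zero3.json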

whysirier commented 8 months ago
python -c 'import torch; print(torch.cuda.device_count())'

Run this to see how many GPUs you have; if there is only one, please use the script with single_gpu in its name.

The script in Qwen-VL already defaults to device_map = None, yet it still errors at runtime: Qwen-VL

Meanwhile Qwen-7B trains fine, and both use the same script, bash finetune/finetune_lora_ds.sh.

jklj077 commented 8 months ago

Meanwhile Qwen-7B trains fine, and both use the same script, bash finetune/finetune_lora_ds.sh.

If Qwen-7B works fine, then this is likely a Qwen-VL problem; transferring the issue to the Qwen-VL repo.

whysirier commented 8 months ago

Meanwhile Qwen-7B trains fine, and both use the same script, bash finetune/finetune_lora_ds.sh.

If Qwen-7B works fine, then this is likely a Qwen-VL problem; transferring the issue to the Qwen-VL repo.

Thanks, I have also opened a question under Qwen-VL.

jiangliqin commented 6 months ago

@whysirier Did you manage to solve this? I am hitting the same problem with Qwen-14B-Chat on V100s.

whysirier commented 6 months ago

@whysirier Did you manage to solve this? I am hitting the same problem with Qwen-14B-Chat on V100s.

It was an sh configuration issue: the script defaulted to 8 GPUs, which caused the error, so you have to change it yourself. The official docs seem to have been updated since.