modelscope / swift

ms-swift: Use PEFT or Full-parameter to finetune 300+ LLMs or 40+ MLLMs. (Qwen2, GLM4, Internlm2.5, Yi, Llama3, Llava, MiniCPM-V, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
https://github.com/modelscope/swift/blob/main/docs/source/LLM/index.md
Apache License 2.0

RuntimeError: CUDA error: device-side assert triggered during multi-GPU inference #654

Closed vivixx1 closed 3 months ago

vivixx1 commented 3 months ago

When I run inference with either the incremental (LoRA adapter) weights or the LoRA-merged weights, I get RuntimeError: CUDA error: device-side assert triggered, yet training runs without any problem. Here is the code that loads the incremental weights:

import json
import os

from transformers.debug_utils import DebugUnderflowOverflow
from modelscope import GenerationConfig

os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'

from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType, get_default_template_type
)
from swift.tuners import Swift

ckpt_dir = '/ai/outfile/qwen1half-72b-chat/v6-20240402-111812/checkpoint-106'
model_type = ModelType.qwen1half_72b_chat
template_type = get_default_template_type(model_type)
model_id_or_path = '/ai/qwenhalf72bchat/Qwen1.5-72B-Chat/'

# Load the base model across the visible GPUs, then attach the LoRA checkpoint.
model, tokenizer = get_model_tokenizer(model_type, model_id_or_path=model_id_or_path,
                                       model_kwargs={'device_map': 'auto'})
model = Swift.from_pretrained(model, ckpt_dir, inference_mode=True)

debug_overflow = DebugUnderflowOverflow(model)
template = get_template(template_type, tokenizer)

model.generation_config = GenerationConfig(
    max_new_tokens=8192,
    temperature=0.9,
    repetition_penalty=1.05,
    do_sample=True,
    max_length=512
)

with open('/ai/123/测试集4.2knn.jsonl', 'r', encoding='utf-8') as f:
    data = [json.loads(item) for item in f]

output_data = []
flag = 0
for line in data:
    # query = prompt + '题目:' + line['query']  # overwritten below; `prompt` is not defined in this snippet
    query = line['query']
    response = inference(model, template, query)
    print(f'response: {response}')
    output_data.append(response)
    flag += 1
    # Flush results to disk every 10 responses.
    if flag == 10:
        with open('/ai/outfile/qwen1half_72b_chat_2.jsonl', 'a', encoding='utf-8') as f:
            for line in output_data:
                f.write(json.dumps({'response': line}, ensure_ascii=False) + '\n')
        output_data = []
        flag = 0

Here is the inference code that uses the merged weights:

import json
import os

from transformers.debug_utils import DebugUnderflowOverflow
from modelscope import GenerationConfig

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'

from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType, get_default_template_type
)

ckpt_dir = '/ai/outfile/qwen1half-72b-chat/v3-20240331-163441/checkpoint-50-merged/'
model_type = ModelType.qwen1half_72b_chat
template_type = get_default_template_type(model_type)

# The merged checkpoint is loaded directly; no adapter is attached afterwards.
model, tokenizer = get_model_tokenizer(model_type, model_kwargs={'device_map': 'auto'},
                                       model_id_or_path=ckpt_dir)
debug_overflow = DebugUnderflowOverflow(model)
template = get_template(template_type, tokenizer)

with open('/ai/123/测试集4.1knn.jsonl', 'r', encoding='utf-8') as f:
    data = [json.loads(item) for item in f]

output_data = []
flag = 0
for line in data:
    # query = prompt + '题目:' + line['query']  # overwritten below; `prompt` is not defined in this snippet
    query = line['query']
    response = inference(model, template, query)
    print(f'response: {response}')
    output_data.append(response)
    flag += 1
    if flag == 10:
        with open('/ai/outfile/qwen1half_72b_chat_test2.jsonl', 'a', encoding='utf-8') as f:
            for line in output_data:
                f.write(json.dumps({'response': line}, ensure_ascii=False) + '\n')
        output_data = []
        flag = 0
vivixx1 commented 3 months ago

The LoRA fine-tuning script is below; the maximum token length is set to 8192.

export NCCL_P2P_DISABLE=1
export NCCL_IB_DISABLE=1
export NCCL_DEBUG=info
export NCCL_SOCKET_IFNAME=eth0
nproc_per_node=6

PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 \
torchrun \
    --nproc_per_node=$nproc_per_node \
    --master_port 29500 \
    /swift/examples/pytorch/llm/llm_sft.py \
    --model_type qwen1half-72b-chat \
    --model_id_or_path /ai/qwenhalf72bchat/Qwen1.5-72B-Chat/ \
    --model_revision master \
    --sft_type lora \
    --tuner_backend swift \
    --template_type AUTO \
    --dtype 'bf16' \
    --output_dir /ai/outfile/ \
    --ddp_backend nccl \
    --custom_train_dataset_path /ai/123/训练集4.1knn_41.jsonl \
    --train_dataset_sample -1 \
    --num_train_epochs 2 \
    --max_length 8192 \
    --truncation_strategy delete \
    --check_dataset_strategy warning \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules DEFAULT \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps $(expr 16 / $nproc_per_node) \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 50 \
    --save_total_limit 2 \
    --logging_steps 1 \
    --use_flash_attn true \
    --deepspeed default-zero3 \
    --save_only_model true

vivixx1 commented 3 months ago

Here is the full error message:

../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [37,0,0], thread: [125,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [37,0,0], thread: [126,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [37,0,0], thread: [127,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
  File "myinfer_merged.py", line 37, in <module>
    response = inference(model, template, query)
  File "/swift/swift/llm/utils/utils.py", line 709, in inference
    generate_ids = model.generate(
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1592, in generate
    return self.sample(
  File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 2696, in sample
    outputs = self(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 1173, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 1058, in forward
    layer_outputs = decoder_layer(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 773, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 711, in forward
    attn_output = self.o_proj(attn_output)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1574, in _call_impl
    hook_result = hook(self, args, result)
  File "/usr/local/lib/python3.8/dist-packages/transformers/debug_utils.py", line 265, in forward_hook
    self.create_frame(module, input, output)
  File "/usr/local/lib/python3.8/dist-packages/transformers/debug_utils.py", line 216, in create_frame
    self.analyse_variable(p, name)
  File "/usr/local/lib/python3.8/dist-packages/transformers/debug_utils.py", line 196, in analyse_variable
    self.expand_frame(get_abs_min_max(var, ctx))
  File "/usr/local/lib/python3.8/dist-packages/transformers/debug_utils.py", line 295, in get_abs_min_max
    return f"{abs_var.min():8.2e} {abs_var.max():8.2e} {ctx}"
  File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 965, in __format__
    return self.item().__format__(format_spec)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

vivixx1 commented 3 months ago

(screenshot of a warning shown during training) This warning appeared during training, but as far as I can tell it only affects output quality and has no other impact.

vivixx1 commented 3 months ago

I also tried reducing the token count of the test set to under 1000 and the same error still occurs. Could this be a hardware problem?
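(Editor's note: one way to separate a hardware fault from a problem with the trained weights might be to push the untouched base model through the same multi-GPU code path. A minimal sketch, reusing only the calls and the base-weights path that already appear in the snippets above; nothing here comes from the issue author.)

# Hypothetical check (assumption, not from the issue): run the unmodified base
# Qwen1.5-72B-Chat under the same device_map='auto' sharding.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'

from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType, get_default_template_type
)

model_type = ModelType.qwen1half_72b_chat
template_type = get_default_template_type(model_type)

# Base-weights path copied from the first snippet in this issue.
model, tokenizer = get_model_tokenizer(
    model_type,
    model_id_or_path='/ai/qwenhalf72bchat/Qwen1.5-72B-Chat/',
    model_kwargs={'device_map': 'auto'})
template = get_template(template_type, tokenizer)

print(inference(model, template, '你好,介绍一下你自己'))  # any short query works here

(If the base model also triggers the assert, the GPUs or driver stack become more plausible; if it runs cleanly, the fine-tuned or merged weights are the more likely suspect.)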

vivixx1 commented 3 months ago

import os
import json

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
os.environ['NCCL_P2P_DISABLE'] = '1'
os.environ['NCCL_IB_DISABLE'] = '1'
os.environ['NCCL_DEBUG'] = 'info'
os.environ['NCCL_SOCKET_IFNAME'] = 'eth0'

from swift.llm import (
    ModelType, get_vllm_engine, get_default_template_type, get_template, inference_vllm
)
from swift.tuners import Swift

ckpt_dir = '/ai/outfile/qwen1half-72b-chat/v3-20240331-163441/checkpoint-50-merged/'
model_type = ModelType.qwen1half_72b_chat
template_type = get_default_template_type(model_type)

llm_engine = get_vllm_engine(model_type, model_id_or_path=ckpt_dir,
                             gpu_memory_utilization=0.95, tensor_parallel_size=4)
tokenizer = llm_engine.hf_tokenizer
template = get_template(template_type, tokenizer)

with open('/ai/123/测试集4.1knn.jsonl', 'r', encoding='utf-8') as f:
    data = [json.loads(item) for item in f]

output_data = []
flag = 0
for line in data:
    query = line['query']
    response = inference_vllm(llm_engine, template, [{'query': query}])[0]
    print(f'response: {response}')
    output_data.append(response)
    flag += 1
    if flag == 10:
        with open('/ai/outfile/qwen1half_72b_chat_2.jsonl', 'a', encoding='utf-8') as f:
            for line in output_data:
                f.write(json.dumps({'response': line}, ensure_ascii=False) + '\n')
        output_data = []
        flag = 0

Worked around it by switching to vLLM-accelerated inference... though I still don't know why the regular inference path fails.