modelscope / ms-swift

Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

internvl2-llama3-76b fine-tuning error #1892

Open · zhangfan-algo opened this issue 2 months ago

zhangfan-algo commented 2 months ago

Describe the bug

Additional context

```shell
torchrun --nproc_per_node ${num_gpu_per_node} --master_port $MASTER_PORT \
    --master_addr $MASTER_ADDR --node_rank $RANK --nnodes $WORLD_SIZE \
    examples/pytorch/llm/llm_sft.py \
    --model_cache_dir OpenGVLab/InternVL2-Llama3-76B \
    --model_type internvl2-llama3-76b \
    --sft_type lora \
    --target_modules DEFAULT \
    --tuner_backend swift \
    --template_type AUTO \
    --output_dir output/homework-correction-0830 \
    --ddp_backend nccl \
    --custom_train_dataset_path homework_correction_train.jsonl \
    --dataset_test_ratio 0.01 \
    --self_cognition_sample -1 \
    --preprocess_num_proc 60 \
    --dataloader_num_workers 60 \
    --train_dataset_sample -1 \
    --dataset_test_ratio 0.01 \
    --save_strategy epoch \
    --lr_scheduler_type cosine \
    --save_total_limit 5 \
    --num_train_epochs 5 \
    --eval_steps 50 \
    --logging_steps 10 \
    --max_length 2048 \
    --check_dataset_strategy warning \
    --gradient_checkpointing true \
    --batch_size 4 \
    --gradient_accumulation_steps 1 \
    --deepspeed_config_path ds_z3_offload_config.json \
    --weight_decay 0.01 \
    --learning_rate 1e-5 \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --use_flash_attn false \
    --save_only_model false \
    --save_on_each_node false \
    --lazy_tokenize true \
    --neftune_noise_alpha 5 \
    --dtype AUTO
```
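Note that `--dataset_test_ratio 0.01` is passed twice in the command above (with the same value, so it is harmless). The referenced `ds_z3_offload_config.json` is not shown anywhere in the thread; for readers reproducing the setup, a typical DeepSpeed ZeRO-3 CPU-offload config looks roughly like the sketch below. The exact contents are an assumption, not the poster's actual file.

```shell
# Hypothetical sketch of a ZeRO-3 CPU-offload DeepSpeed config; the poster's
# actual ds_z3_offload_config.json may differ.
cat > ds_z3_offload_config.json <<'EOF'
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "bf16": { "enabled": "auto" },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto"
}
EOF
```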

zhangfan-algo commented 2 months ago

[screenshot of the error]

Jintao-Huang commented 2 months ago

Try pulling the latest code first.
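In practice that means updating the source checkout and reinstalling; a minimal sketch, assuming ms-swift was installed in editable mode from the git repo (the path is illustrative):

```shell
# Update an existing source checkout of ms-swift and reinstall it.
cd ms-swift
git pull
pip install -e .
```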

zhangfan-algo commented 2 months ago

> Try pulling the latest code first.

I ran it with the latest git code, pulled this morning.

zhangfan-algo commented 2 months ago

> Try pulling the latest code first.

It still doesn't work. Could you please take a look as soon as you can? Thanks!

tastelikefeet commented 2 months ago

Does `--tuner_backend peft` work?
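That is, rerun the same launch with only the backend flag swapped. A trimmed, hypothetical sketch (most flags from the original command omitted for brevity):

```shell
# Same launch as above with only the tuner backend changed from swift to peft;
# the remaining flags stay as in the original command.
torchrun --nproc_per_node ${num_gpu_per_node} examples/pytorch/llm/llm_sft.py \
    --model_type internvl2-llama3-76b \
    --sft_type lora \
    --tuner_backend peft \
    --custom_train_dataset_path homework_correction_train.jsonl \
    --output_dir output/homework-correction-0830
```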

zhangfan-algo commented 2 months ago

It currently runs with transformers==4.44.0, but it logs "Could not estimate the number of tokens of the input, floating-point operations will not be computed", and the fine-tuned model performs noticeably worse than qwen2-vl-7B.
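For context, that message is a warning from transformers' token-estimation helper: when the model's main input name (`input_ids` for most LLMs) is absent from the batch, as is common with multimodal inputs, token counting is skipped and only the reported FLOPs statistics are affected; training itself proceeds normally. Pinning the version the poster reports as working:

```shell
# Pin the transformers version reported as working in this thread.
pip install "transformers==4.44.0"
```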