modelscope / ms-swift

Use PEFT or Full-parameter to finetune 350+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

Training MLLMs with reinforcement learning #2212

Open AnsongLi opened 3 days ago

AnsongLi commented 3 days ago

Describe the feature: Please provide a best-practice guide for training MLLMs (e.g. qwen2-vl) with reinforcement learning methods such as DPO.

Additional context: For now I only want to fine-tune the language-model part of the MLLM with DPO + LoRA, but I keep running into errors while trying.

Jintao-Huang commented 3 days ago

Please post a screenshot of the error.

AnsongLi commented 2 days ago

Environment: 4× A100, CUDA 12.2, Python 3.11.5. There is currently a dependency conflict I can't resolve: qwen2-vl requires transformers==4.45.0.dev0, which is incompatible with llmuses-0.3.0. I'm not sure whether this is the cause of the error.

Data format: the standard text-only RLHF format supported by the framework. (If I later want to do image-based RLHF, does the framework already support a data format for that?)
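For concreteness, each line of my jsonl follows the text-only RLHF schema mentioned above; the values here are placeholders, not my real data:

```json
{"query": "Which is larger, 3/4 or 2/3?", "response": "3/4 is larger, since 9/12 > 8/12.", "rejected_response": "2/3 is larger."}
```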

Error:

```
rank0:[W1010 03:00:50.589188483 logger.cpp:330] Warning: Cuda time stats are not collected for multi-device modules. (function operator())
rank0: Traceback (most recent call last):
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/cli/rlhf.py", line 5, in <module>
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/utils/run_utils.py", line 32, in x_main
rank0:     result = llm_x(args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/llm/rlhf.py", line 25, in llm_rlhf
rank0:     return trainer_train(
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/llm/sft.py", line 458, in trainer_train
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/trainers/mixin.py", line 424, in train
rank0:     res = super().train(resume_from_checkpoint, *args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2021, in train
rank0:     return inner_training_loop(
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2357, in _inner_training_loop
rank0:     tr_loss_step = self.training_step(model, inputs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 3454, in training_step
rank0:     loss = self.compute_loss(model, inputs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/trl/trainer/dpo_trainer.py", line 1513, in compute_loss
rank0:     loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/trl/trainer/dpo_trainer.py", line 1439, in get_batch_loss_metrics
rank0:     forward_output = self.concatenated_forward(model, batch)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/trainers/mixin.py", line 717, in concatenated_forward
rank0:     outputs = model(**model_kwargs, use_cache=False)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
rank0:     return self._call_impl(*args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
rank0:     return forward_call(*args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
rank0:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
rank0:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
rank0: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by
rank0: making sure all forward function outputs participate in calculating loss.
rank0: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
rank0: Parameter indices which did not receive grad for rank 0: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 ...
rank0: In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this
```

Jintao-Huang commented 2 days ago

Try setting --ddp_find_unused_parameters true
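For reference, the flag is just appended to the existing launch command. A sketch, assuming a typical multi-GPU `swift rlhf` DPO launch on swift 2.x; the model/dataset values are placeholders, so adjust to your setup:

```bash
# Sketch only: placeholder model/dataset values; flag names follow the swift 2.x CLI.
NPROC_PER_NODE=4 \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
swift rlhf \
    --rlhf_type dpo \
    --model_type qwen2-vl-7b-instruct \
    --sft_type lora \
    --dataset /path/to/dpo_data.jsonl \
    --ddp_find_unused_parameters true
```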

AnsongLi commented 2 days ago

New error:

```
/root/miniconda3/lib/python3.11/site-packages/torch/utils/checkpoint.py:295: FutureWarning: torch.cpu.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cpu', args...) instead.
  with torch.enable_grad(), device_autocast_ctx, torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs):  # type: ignore[attr-defined]
rank0: Traceback (most recent call last):
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/cli/rlhf.py", line 5, in <module>
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/utils/run_utils.py", line 32, in x_main
rank0:     result = llm_x(args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/llm/rlhf.py", line 25, in llm_rlhf
rank0:     return trainer_train(
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/llm/sft.py", line 458, in trainer_train
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/swift/trainers/mixin.py", line 424, in train
rank0:     res = super().train(resume_from_checkpoint, *args, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2021, in train
rank0:     return inner_training_loop(
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2357, in _inner_training_loop
rank0:     tr_loss_step = self.training_step(model, inputs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 3487, in training_step
rank0:     self.accelerator.backward(loss, **kwargs)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/accelerate/accelerator.py", line 2013, in backward
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/_tensor.py", line 521, in backward
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/autograd/__init__.py", line 289, in backward
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
rank0:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/autograd/function.py", line 306, in apply
rank0:     return user_fn(self, *args)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 313, in backward
rank0:     torch.autograd.backward(outputs_with_grad, args_with_grad)
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/autograd/__init__.py", line 289, in backward
rank0:   File "/root/miniconda3/lib/python3.11/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
rank0:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
rank0: RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
rank0: Parameter at index 651 with name base_model.model.layers.27.mlp.down_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
```

tastelikefeet commented 2 days ago

--deepspeed default-zero2
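As I understand it, this helps because with --deepspeed the Trainer wraps the model in a DeepSpeed ZeRO-2 engine instead of torch DDP, so the DDP reducer that raised both errors above is no longer involved. A sketch of the adjusted launch, reusing the same placeholder values as before:

```bash
# Sketch only: swap the DDP workaround for DeepSpeed ZeRO-2.
NPROC_PER_NODE=4 \
swift rlhf \
    --rlhf_type dpo \
    --model_type qwen2-vl-7b-instruct \
    --sft_type lora \
    --dataset /path/to/dpo_data.jsonl \
    --deepspeed default-zero2
```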

AnsongLi commented 1 day ago

Thanks. If I later want to do image-based RLHF, is there data-format support for that? The current text-only format is {'query': xxx, 'response': xxx, 'rejected_response': xxx}. Is there already a data format that lets image information be added to RLHF samples? If not, what are the options for working around it?
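For example, would something like the line below be accepted? The images field and the <image> tag are only my guess at how it might look, borrowed from the multimodal custom-dataset format, not a confirmed schema:

```json
{"query": "<image>Which animal is in the picture?", "response": "A cat.", "rejected_response": "A dog.", "images": ["/path/to/cat.jpg"]}
```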