Closed: leizhu1989 closed this issue 1 year ago.
Same question here. I would also like to know how to implement the three-step training of ChatGPT with ColossalAI.
I think 'Train with dummy prompt data' is the third step of ChatGPT.
I have the same problem; any guidance would be appreciated.
I think train_prompts.py is the first step (training the SFT model), train_reward_model.py is the second step (training the RM), and train_dummy.py does PPO training: initial_model uses the model from the first step and critic_model uses the model from the second step, so that is the third step, RLHF. As for train_prompts.py, it also uses PPOTrainer, and initial_model and critic_model can use the original pre-trained model. I don't know whether this is the case.
Thank you for your reply.
I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the SFT model is introduced as a pre-trained model in a later step. We can train it ourselves in a fine-tuning way.
Looking at the paper, the first and second steps use prompt data, while the last step does not seem to require prompt data. I'm not sure either. Also, do you know how to use the trained model for inference or for deployment?
train_dummy.py is copied from train_prompts.py, with only one line of code added for generating dummy data.
As we can see from the figure in the paper, the third step uses the prompt data and the GPT-3 model to generate some results, and then uses reinforcement learning to learn how to choose better responses. So I think the third step is actually doing prompt training as well.
As for the model training, I am also exploring it, and there is a lack of data at the moment.
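The reward shaping behind that third step can be sketched in miniature. This is a toy illustration of the general RLHF recipe, not the ColossalAI code; the helper name and the `beta` value are made up for the example.

```python
# Toy sketch of step-3 reward shaping in RLHF (hypothetical helper, not
# the ColossalAI implementation): the reward model's scalar score is
# combined with a per-token KL penalty between the actor (the policy
# being trained) and the frozen initial_model, so the policy cannot
# drift too far from the SFT model.

def kl_penalized_reward(rm_score, actor_logprobs, initial_logprobs, beta=0.1):
    """Per-token reward = beta * (log pi_init - log pi_actor);
    the reward model's score is added to the last generated token."""
    rewards = [beta * (i - a) for a, i in zip(actor_logprobs, initial_logprobs)]
    rewards[-1] += rm_score
    return rewards

# If the actor agrees with the initial model, there is no KL penalty and
# the reward model's score lands on the final token:
print(kl_penalized_reward(1.0, [-2.0, -1.5], [-2.0, -1.5]))  # [0.0, 1.0]
```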
I think inference works like GPT-2: it predicts tokens one by one. So you can load the last trained model and run inference the same way as with GPT-2.
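That token-by-token decoding loop can be sketched with a toy stand-in for the model's forward pass; with a real trained actor you would call the Hugging Face `generate()` method instead of this hand-rolled loop.

```python
# Toy sketch of autoregressive (word-by-word) greedy inference.
# `next_token_scores` stands in for a real language model's forward pass.

def greedy_generate(prompt_tokens, next_token_scores, max_new_tokens, eos=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = next_token_scores(tokens)                    # logits over the vocab
        next_tok = max(range(len(scores)), key=scores.__getitem__)
        tokens.append(next_tok)
        if next_tok == eos:                                   # stop at end-of-sequence
            break
    return tokens

# Toy "model" with vocab size 5 that always prefers the token after the last one:
toy = lambda toks: [1.0 if i == (toks[-1] + 1) % 5 else 0.0 for i in range(5)]
print(greedy_generate([1], toy, 3))  # [1, 2, 3, 4]
```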
OK, my QQ: 805650606
Thanks for your reply!
Now that we need to do the fine-tuning (first) step ourselves, do you know of any fine-tuning code that could be integrated into this project?
Training with the Transformers framework is relatively simple. There are plenty of tutorials on the web for fine-tuning, or you can refer to the official documentation.
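For reference, the objective behind that "simple supervised fine-tune" is just next-token cross-entropy on demonstration text. The probabilities below are toy numbers; in a real run, a causal LM fine-tuned with the Transformers `Trainer` computes this loss internally.

```python
import math

# Sketch of the step-1 (SFT) objective: mean negative log-likelihood of
# the correct next tokens over human demonstration data.

def next_token_nll(token_probs):
    """token_probs: probability the model assigned to each correct next token."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

print(round(next_token_nll([0.5, 0.25]), 4))  # 1.0397
```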
Thank you for your feedback, and sorry about the late reply. In /applications/ChatGPT/examples/ we have 3 examples:
train_dummy -> shows the vanilla way to start training step 3
train_prompts -> uses prompts to train in training step 3
train_reward_model -> trains the RM in training step 2
Because training step 1 is a simple supervised fine-tuning process, as with many other models, we don't implement it here.
Thanks! Could you please add a vanilla inference example for ChatGPT?
Could you show this simple SFT code?
I have the same problem too. Could you show this simple SFT code?
Hi @graciechen @wqw547243068 @cloudfool, we have updated a lot. Please check the latest code and docs: https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat/examples This issue was closed due to inactivity. Thanks.
📚 The doc issue
Hello, author! I'm unsure about the training correspondence; maybe my understanding is wrong. In /applications/ChatGPT/examples/, as far as I can tell, 'Train with dummy prompt data' is the first step of ChatGPT and 'Train the reward model' is the second step, but I don't know where the third step (RLHF, using the pre-trained language model together with the reward model) is, and what the 'Train with real prompt data' step is for.