hpcaitech / ColossalAI

Making large AI models cheaper, faster and more accessible
https://www.colossalai.org
Apache License 2.0
38.82k stars · 4.35k forks

About ChatGPT's three training steps #2793

Closed leizhu1989 closed 1 year ago

leizhu1989 commented 1 year ago

📚 The doc issue

Hello! I'm not sure about the training correspondence; maybe my understanding is wrong. In /applications/ChatGPT/examples/, as far as I can tell, 'Train with dummy prompt data' is the first step of ChatGPT and 'Train the reward model' is the second step, but I don't see the third step, RLHF (tuning the pretrained language model with the reward model). Also, what is the 'Train with real prompt data' step?

zhouzhou12 commented 1 year ago

Same question here. I'd also like to know how to implement ChatGPT's three-step training with ColossalAI.

cloudfool commented 1 year ago

I think 'Train with dummy prompt data' is the third step of ChatGPT.

Muzzypepper commented 1 year ago

I have the same problem; any guidance would be appreciated.

Muzzypepper commented 1 year ago

I think `train_prompts.py` is the first step, to train the SFT model; `train_reward_models.py` is the second step, to train the RM; and `train_dummy.py` uses PPO training, where `initial_model` uses the model from the first step and `critic_model` uses the model from the second step, so that is the third step, RLHF. As for `train_prompts.py`, `PPOTrainer` is also used there; `initial_model` and `critic_model` can use the original pretrained model. I don't know if this is right.
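The division of roles mentioned above can be sketched abstractly: the actor proposes a response, the reward model scores it, the critic supplies a value baseline, and the frozen initial model contributes a KL penalty keeping the actor close to its starting point. This is an illustrative toy, not ColossalAI's actual API; the models are passed in as plain callables and `kl_coef` is a made-up name for the KL penalty weight.

```python
def ppo_models_step(prompt, actor, critic, initial_model, reward_model, kl_coef=0.1):
    """One conceptual PPO step over a single prompt (toy sketch).

    actor(prompt) -> (response, log-prob of response under the policy)
    initial_model(prompt, response) -> log-prob under the frozen initial model
    reward_model(prompt, response) -> scalar RM score
    critic(prompt, response) -> scalar value estimate (baseline)
    """
    response, logp_actor = actor(prompt)
    logp_init = initial_model(prompt, response)
    score = reward_model(prompt, response)
    # KL-shaped reward: RM score minus a penalty for drifting from initial_model
    reward = score - kl_coef * (logp_actor - logp_init)
    # Advantage: reward relative to the critic's value baseline
    advantage = reward - critic(prompt, response)
    return reward, advantage
```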

leizhu1989 commented 1 year ago

> I think `train_prompts.py` is the first step, to train the SFT model; `train_reward_models.py` is the second step, to train the RM; and `train_dummy.py` uses PPO training, where `initial_model` uses the model from the first step and `critic_model` uses the model from the second step, so that is the third step, RLHF. As for `train_prompts.py`, `PPOTrainer` is also used there; `initial_model` and `critic_model` can use the original pretrained model. I don't know if this is right.

Thank you for your reply.

yaoing commented 1 year ago

I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is introduced as a pretrained model in a later step. We can train it ourselves by fine-tuning.

Muzzypepper commented 1 year ago

> I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is introduced as a pretrained model in a later step. We can train it ourselves by fine-tuning.

Looking at the paper, the first and second steps use prompt data, but the last step does not seem to require prompt data. I'm not sure either. Also, do you know how to use the trained model for inference or deployment?

yaoing commented 1 year ago

The train_dummy.py script is copied from train_prompts.py, with only one line of code added to generate dummy data.

As we can see from the figure in the paper, the third step uses the prompt data and the GPT-3 model to generate some results, then uses reinforcement learning to learn how to choose better responses. So I think the third step is actually doing prompt training as well.

As for the model training, I am also exploring it, and there is a lack of data at the moment.
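For intuition, the third-step loop described above (generate with the policy, score with the reward model, update with a PPO-style clipped objective) can be sketched in a few lines of toy Python. Everything below is illustrative; none of these names come from ColossalAI's code, and the "gradient step" is faked as a small log-prob shift.

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate objective for a single action."""
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

def rlhf_step3_loop(prompts, actor_generate, reward_model):
    """One pass over the prompt dataset (a sketch of the step-3 idea)."""
    objectives = []
    for prompt in prompts:
        response, logp_old = actor_generate(prompt)   # sample from the policy
        reward = reward_model(prompt, response)       # score with the RM
        advantage = reward                            # toy: no critic baseline
        logp_new = logp_old + 0.05                    # pretend one gradient step
        objectives.append(ppo_clip_objective(logp_new, logp_old, advantage))
    return objectives
```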

leizhu1989 commented 1 year ago

> Looking at the paper, the first and second steps use prompt data, but the last step does not seem to require prompt data. I'm not sure either. Also, do you know how to use the trained model for inference or deployment?

I think inference works like GPT-2: the model also predicts tokens one by one, so you can load the last trained model and run inference the same way as with GPT-2.
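The token-by-token inference described above can be illustrated with a toy greedy decoder. The hard-coded bigram table is a stand-in for the trained network; a real run would call a GPT-2-style model for next-token scores at each step.

```python
def toy_next_token_scores(sequence):
    """Stand-in for a language model: score next tokens given the last one."""
    bigram = {
        "<bos>": {"hello": 0.9, "bye": 0.1},
        "hello": {"world": 0.8, "<eos>": 0.2},
        "world": {"<eos>": 1.0},
    }
    return bigram.get(sequence[-1], {"<eos>": 1.0})

def greedy_decode(max_len=10):
    """Autoregressive decoding: append the argmax token until <eos>."""
    seq = ["<bos>"]
    for _ in range(max_len):
        scores = toy_next_token_scores(seq)
        next_tok = max(scores, key=scores.get)   # greedy: pick the best token
        if next_tok == "<eos>":
            break
        seq.append(next_tok)
    return seq[1:]   # drop the <bos> marker
```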

leizhu1989 commented 1 year ago

> As for the model training, I am also exploring it, and there is a lack of data at the moment.

OK. My QQ: 805650606

Muzzypepper commented 1 year ago

Thanks for your reply!

cloudfool commented 1 year ago

> I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is introduced as a pretrained model in a later step. We can train it ourselves by fine-tuning.

Now that we need to do the fine-tuning (1st) step ourselves, do you know of any fine-tuning code that could be integrated into this project?

yaoing commented 1 year ago

> I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is introduced as a pretrained model in a later step. We can train it ourselves by fine-tuning.
>
> Now that we need to do the fine-tuning (1st) step ourselves, do you know of any fine-tuning code that could be integrated into this project?

Training with the Transformers framework is relatively simple, and there are plenty of fine-tuning tutorials on the web; you can also refer to the official documentation.
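For what it's worth, the SFT objective itself is just next-token cross-entropy on demonstration data. A toy sketch of the loss follows; the `model_prob` callable is a stand-in for a real causal LM, not any library API.

```python
import math

def sft_loss(model_prob, tokens):
    """Supervised fine-tuning loss: average negative log-likelihood of each
    gold next token, i.e. -log P(tokens[t+1] | tokens[:t+1]), averaged over t."""
    nll = 0.0
    for t in range(len(tokens) - 1):
        nll -= math.log(model_prob(tokens[: t + 1], tokens[t + 1]))
    return nll / (len(tokens) - 1)

def uniform_model(context, next_token, vocab_size=4):
    """Stand-in model assigning uniform probability to every token."""
    return 1.0 / vocab_size
```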

ht-zhou commented 1 year ago

Thank you for your feedback, and sorry about the late reply. In /applications/ChatGPT/examples/ we have 3 examples:

- train_dummy -> shows the vanilla way to start training step 3
- train_prompts -> uses prompts to train in training step 3
- train_reward_model -> trains the RM in training step 2

Because training step 1 is a simple supervised fine-tuning process, as for many other models, we don't implement it here.
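As a side note on step 2: a reward model is typically trained with a pairwise ranking loss over "chosen" vs. "rejected" responses to the same prompt, in the InstructGPT style. A minimal sketch of that loss (illustrative only, not copied from train_reward_model.py):

```python
import math

def pairwise_rm_loss(r_chosen, r_rejected):
    """Pairwise ranking loss -log(sigmoid(r_chosen - r_rejected)).

    The loss shrinks as the RM scores the chosen response further above
    the rejected one, so minimizing it teaches the RM the preference order.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```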

cloudfool commented 1 year ago

> Thank you for your feedback, and sorry about the late reply. In /applications/ChatGPT/examples/ we have 3 examples: train_dummy shows the vanilla way to start training step 3; train_prompts uses prompts to train in training step 3; train_reward_model trains the RM in training step 2. Because training step 1 is a simple supervised fine-tuning process, as for many other models, we don't implement it here.

Thanks! Could you please add vanilla inference code for ChatGPT?

wqw547243068 commented 1 year ago

Could you show this simple SFT code?

graciechen commented 1 year ago

I have the same problem, too. Could you show this simple SFT code?

binmakeswell commented 1 year ago

Hi @graciechen @wqw547243068 @cloudfool, we have updated a lot. Please check the latest code and docs: https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat/examples This issue was closed due to inactivity. Thanks.