Symbolk opened this issue 1 year ago
Yeah, the paper shows the instructions are generated by GPT-4 or GPT-3.5. But how the responses to the newly generated instructions are produced is not explicitly stated in the paper. I would guess the responses are generated by GPT-4.
Do you mean only the ### Response is generated by GPT-4? A third option is generating the programming tasks with GPT-3.5 and then passing each task to GPT-4 to generate the solution.
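That two-stage option could be sketched roughly like this. Note this is only an illustration of the idea, not WizardCoder's actual pipeline; `call_model` is a hypothetical placeholder so the sketch runs offline, and you would swap in a real API client (e.g. the OpenAI SDK) in practice.

```python
def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call;
    # returns a canned string so this sketch runs without network access.
    return f"[{model} output for: {prompt[:40]}...]"

def evolve_and_solve(seed_instruction: str) -> dict:
    # Stage 1: use the cheaper model to evolve the seed into a harder task.
    task = call_model(
        "gpt-3.5-turbo",
        f"Rewrite this programming task to be more complex:\n{seed_instruction}",
    )
    # Stage 2: use the stronger model to produce the solution for the new task.
    solution = call_model("gpt-4", f"Solve this programming task:\n{task}")
    return {"instruction": task, "response": solution}
```

The appeal of this split is cost: instruction evolution is cheap text rewriting, while solution generation benefits most from the stronger model.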
FYI: as the just-leaked GPT-4 details show (https://threadreaderapp.com/thread/1678545170508267522.html), GPT-4 was trained for 2 epochs on text-based data and 4 epochs on code-based data!
@Symbolk my objective is just batch inference; in that case, what would my prompt template be? An example would help if you can explain. Thanks in advance.
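For batch inference, a minimal sketch of the Alpaca-style template commonly used with WizardCoder-family models looks like the following. The exact wording of the preamble is an assumption here; check the model card of the specific checkpoint you are serving, since templates vary between releases.

```python
# Alpaca-style template widely used with WizardCoder-family models
# (assumed wording; verify against your checkpoint's model card).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:"
)

def build_prompts(instructions):
    # Format a batch of raw instructions into full prompts for batch inference.
    return [PROMPT_TEMPLATE.format(instruction=ins) for ins in instructions]

batch = build_prompts([
    "Write a Python function that reverses a string.",
    "Explain what a binary search tree is.",
])
```

Each formatted prompt then goes to the model as-is, and generation is typically stopped when the model emits the next `### Instruction:` marker or an end-of-sequence token.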
Hi, did you successfully reproduce the training data?
Thanks for this repo! I have also been reading the paper recently, but I did not notice which LLM WizardCoder used to generate their Evol-Instruct data. According to your implementation, gpt4_azure is used. Is that the same as WizardCoder (considering that Microsoft insiders could use the API for free since early this year), or are you just guessing they used GPT-4?