Open ehartford opened 1 year ago
Your finetune guide (https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/README.md#fine-tuning) still references the 70k dataset
Yep, noticed this too. Perhaps @nlpxucan forgot to update that particular section of the readme during the last commit (2 days ago).
Or maybe they used FastChat rather than Llamax.
I'll presume we're using FastChat until I hear otherwise.
The Llamax code knows how to handle Alpaca-formatted QA data, but I didn't see anything in there to handle ShareGPT-format data.
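For anyone hitting the same wall, here's a minimal sketch of what the format difference looks like and how you might bridge it. The field names follow the common Alpaca (`instruction`/`input`/`output`) and ShareGPT (`conversations` with `from`/`value`) conventions; they're not taken from this repo, so check them against the actual dataset before relying on this.

```python
# Sketch: convert an Alpaca-format QA record into a ShareGPT-style
# conversation. Field names follow the common community conventions
# for the two formats, not anything specific to this repo.

def alpaca_to_sharegpt(record):
    """Map one Alpaca record to one single-turn ShareGPT conversation."""
    prompt = record["instruction"]
    if record.get("input"):  # Alpaca's optional context field
        prompt += "\n" + record["input"]
    return {
        "conversations": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": record["output"]},
        ]
    }

example = {
    "instruction": "Translate to French.",
    "input": "Hello",
    "output": "Bonjour",
}
print(alpaca_to_sharegpt(example))
```

ShareGPT's advantage is that the `conversations` list can hold multiple human/gpt turns, which single-turn Alpaca records can't express, so a converter like this only covers one direction.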
How do I finetune with the new format?