SteveKGYang / MentalLLaMA

This repository introduces MentaLLaMA, the first open-source instruction following large language model for interpretable mental health analysis.
MIT License

Training data #2

Open NtaylorOX opened 8 months ago

NtaylorOX commented 8 months ago

Great work and repo.

While I'm aware the actual training likely follows a general LLM training script/flow, it would be nice to see the training scripts. Is there any plan to upload them?

SteveKGYang commented 8 months ago

Thank you very much for your interest. For the currently released parts of MentaLLaMA (mostly SFT), we largely modified the training architecture of FastChat, so I'll point you to their repo for now. We are also working towards further enhancing MentaLLaMA with other techniques such as RLHF, and we will release that code. Stay tuned!
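
For reference, FastChat's SFT pipeline trains on ShareGPT-style "conversations" JSON, so a likely first step is converting (query, response) pairs into that layout. Below is a minimal, hypothetical sketch of such a conversion; the input field names (`query`, `gpt-3.5-turbo`) follow the released instruction data mentioned later in this thread, and the file paths are placeholders, not the exact MentaLLaMA preprocessing:

```python
import json

def to_fastchat_format(records):
    """Convert (query, response) records into FastChat's ShareGPT-style
    "conversations" JSON. Field names here are assumptions based on the
    released instruction data, not a confirmed MentaLLaMA script."""
    converted = []
    for i, rec in enumerate(records):
        converted.append({
            "id": str(i),
            "conversations": [
                {"from": "human", "value": rec["query"]},
                {"from": "gpt", "value": rec["gpt-3.5-turbo"]},
            ],
        })
    return converted

# Placeholder paths for illustration only.
with open("dr_instruction_data.json") as f:
    records = json.load(f)

with open("dr_fastchat.json", "w") as f:
    json.dump(to_fastchat_format(records), f, indent=2)
```

The resulting file can then be passed to FastChat's training entry point as its data path.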

NtaylorOX commented 8 months ago

Thanks for the reply. Very helpful, and looking forward to what is to come.

NtaylorOX commented 8 months ago

Actually, one quick question. To perform the SFT for MentalLLaMA with the instruction training data for, say, the DR task: do you treat this as a standard auto-regressive objective and combine the "query" and the "gpt-3.5-turbo" response? I'm hoping to play around with training some smaller models/architectures myself.

SteveKGYang commented 8 months ago

Yes, this is the standard instruction-tuning paradigm. I suggest you build on foundation models that have already been through SFT/RLHF (e.g. LLaMA2-chat, Vicuna), as they will facilitate your training process, especially with small training datasets.
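
Concretely, the standard recipe concatenates the query and the response into one token sequence, trains with the causal-LM loss, and masks the query tokens out of the loss. A minimal sketch with Hugging Face Transformers is below; the base model name, prompt text, and single-example loop are illustrative assumptions, not the exact MentaLLaMA training setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base model; not necessarily the exact MentaLLaMA configuration.
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical DR-style example: the "query" and the "gpt-3.5-turbo" response.
query = "Consider this post: ... Question: Does the poster suffer from depression?"
response = "Yes, the poster shows signs of depression because ..."

# Concatenate query + response into one sequence for the autoregressive objective.
prompt_ids = tokenizer(query, add_special_tokens=True).input_ids
response_ids = tokenizer(response, add_special_tokens=False).input_ids
response_ids = response_ids + [tokenizer.eos_token_id]

input_ids = torch.tensor([prompt_ids + response_ids])
labels = input_ids.clone()
labels[0, :len(prompt_ids)] = -100  # -100 = ignored by the loss: no loss on the query

outputs = model(input_ids=input_ids, labels=labels)
outputs.loss.backward()  # one SFT step (optimizer/batching omitted for brevity)
```

Whether the prompt tokens are masked or also included in the loss varies across codebases; the masking shown here matches the common FastChat/Vicuna-style convention.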

NtaylorOX commented 8 months ago

Thought so, just double-checking. Thanks for the prompt reply! I'll keep you posted if I develop anything that could be brought into this repo.

SteveKGYang commented 8 months ago

Thanks! Any contributions will be appreciated!