SteveKGYang / MentalLLaMA

This repository introduces MentaLLaMA, the first open-source instruction following large language model for interpretable mental health analysis.
MIT License

About training code and scripts #5

Open wytbwytb opened 6 months ago

wytbwytb commented 6 months ago

Nice work! When will you upload the training code and scripts?

SteveKGYang commented 6 months ago

We mostly modified the scripts of FastChat (https://github.com/lm-sys/FastChat) for the fine-tuning process. You can look into that repository.
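For readers who want a starting point: FastChat's fine-tuning entry point is `fastchat/train/train_mem.py`, launched with `torchrun`. The sketch below shows the general shape of such a launch; the model path, data path, and hyperparameters are placeholders, not the settings the MentaLLaMA authors used.

```shell
# Illustrative FastChat-style full fine-tuning launch (placeholder values).
torchrun --nproc_per_node=4 fastchat/train/train_mem.py \
    --model_name_or_path /path/to/base-llama-model \
    --data_path /path/to/instruction_data.json \
    --bf16 True \
    --output_dir ./output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 2 \
    --learning_rate 2e-5
```

Consult FastChat's own README for the full argument list and the memory-saving variants of the training script.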

NirmalManoj commented 5 months ago

Hi @SteveKGYang, great work! Can you please release the code used for training bart-large and T5?

NirmalManoj commented 5 months ago

@SteveKGYang I want to fine-tune with bart-base, but with the same code, processing, etc. that your team used.

biirving commented 1 month ago

Why not just release it bro

Zuhashaik commented 3 days ago

Which fine-tuning method did you use for this?

SteveKGYang commented 3 days ago

@Zuhashaik We used full fine-tuning, which means all parameters are tuned.
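To make the distinction concrete: in full fine-tuning, every parameter keeps `requires_grad=True` (PyTorch's default), so the optimizer updates all of them, whereas parameter-efficient methods freeze most weights first. A minimal sketch with a tiny stand-in model (MentaLLaMA itself fine-tuned LLaMA-scale models):

```python
import torch.nn as nn

# Tiny stand-in model for illustration only.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

total = sum(p.numel() for p in model.parameters())

# Full fine-tuning: all parameters are trainable, so these counts match.
trainable_full = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Contrast: a parameter-efficient setup would freeze most weights first,
# leaving only a small subset trainable.
for p in model[0].parameters():
    p.requires_grad = False
trainable_frozen = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

Here `trainable_full` equals `total`, while `trainable_frozen` is smaller after freezing the first layer.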