skywalker023 / sodaverse

πŸ₯€πŸ§‘πŸ»β€πŸš€Code and dataset for our EMNLP 2023 paper - "SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization"
https://aclanthology.org/2023.emnlp-main.799/
MIT License

Fine tune ourselves? #5

Closed: gameveloster closed this issue 1 year ago

gameveloster commented 1 year ago

Hi, is it currently possible to take your pretrained cosmo-xl model and fine-tune it on conversation text from a specific domain, so that this new knowledge can be used during dialogue with COSMO?

skywalker023 commented 1 year ago

Yes, you are free to fine-tune the model πŸ™ŒπŸ»
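
For reference, cosmo-xl is a seq2seq (T5-based) model available on the HuggingFace Hub as `allenai/cosmo-xl`, so a standard Transformers fine-tuning setup applies. Below is a minimal sketch, assuming your domain conversations are already flattened into (context, response) pairs; the example data, output path, and hyperparameters are placeholders, and for best results the inputs should follow COSMO's own prompt format (situation narrative plus turn-separated dialogue history; see the repo for details).

```python
# Minimal fine-tuning sketch for allenai/cosmo-xl with HuggingFace Transformers.
# Domain data, output_dir, and hyperparameters below are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl")

# Hypothetical domain data: each example pairs a dialogue context with the target reply.
raw = {
    "context": ["A customer asks about billing. Customer: My invoice looks wrong."],
    "response": ["Sorry about that! Could you share the invoice number?"],
}
dataset = Dataset.from_dict(raw)

def preprocess(batch):
    # Tokenize the dialogue context as the encoder input and the reply as labels.
    model_inputs = tokenizer(batch["context"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["response"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="cosmo-xl-domain",    # placeholder path
        per_device_train_batch_size=1,   # cosmo-xl (~3B params) is memory-hungry
        gradient_accumulation_steps=16,
        learning_rate=1e-4,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

With a model this size you may also want gradient checkpointing or a parameter-efficient method (e.g. LoRA) if GPU memory is tight; the training recipe above is a generic seq2seq setup, not the paper's own.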