Closed by xiaoxixxxx 1 year ago
@asahi417 , put another way, is there anything stopping us from fine-tuning say, llama 2, for QAG with the provided datasets? I am interested in doing this as part of my PhD... :-).
Hi there, I'm quite open to fine-tuning the latest LLMs, but LMQG currently supports encoder-decoder LM fine-tuning only, and cannot be used to fine-tune decoder-only LMs, including the llama and gpt families.
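In the meantime, one workaround is to flatten the QAG datasets into single prompt/completion strings and fine-tune a decoder-only model with a standard causal-LM training loop. A minimal sketch of the data conversion, assuming a paragraph plus a list of question-answer pairs per example (the field layout and the `" | "` separator here are assumptions, not LMQG's exact format):

```python
# Hypothetical sketch: flatten a QAG example (paragraph -> QA pairs) into one
# training string usable for causal-LM (decoder-only) fine-tuning.
# Prompt prefix and separator are assumptions, not LMQG's exact convention.

def to_causal_lm_example(paragraph, qa_pairs, sep=" | "):
    """Join a paragraph and its QA pairs into a single prompt/target string."""
    target = sep.join(f"question: {q}, answer: {a}" for q, a in qa_pairs)
    return f"generate question and answer from paragraph: {paragraph}\n{target}"

example = to_causal_lm_example(
    "William Turner was an English painter who specialised in watercolours.",
    [("Who was William Turner?", "an English painter")],
)
print(example)
```

Each resulting string could then be tokenized and fed to a causal-LM trainer (e.g. the HuggingFace `Trainer`), masking the loss on the prompt portion if you only want the model to learn the QA target.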
Hey, thank you for your impressive work.
I wonder whether you have considered comparing generation with fine-tuned LMs against generation with LLMs?