Open zhaozh10 opened 7 months ago
LoRA has been a very popular PEFT technique since Spring 2023, and LLaVA also offers it — so what's the difference between PeFoMed and LLaVA?
I have the same question
Me too.
PeFoMed is a lightweight two-stage framework for efficiently fine-tuning general-domain models for various downstream medical applications. We adopt a PEFT technique instead of fully fine-tuning the LLM, as is done in LLaVA. In the updated paper, we have also explored the pitfalls of conventional lexical metrics in LLM-based generative tasks and systematically analyzed the discrepancy between human evaluations and those conducted with GPT-4, which advocates for using GPT-4, or other LLMs, as a measurement engine for generative tasks.
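For context, the core idea behind LoRA-style PEFT (freeze the pretrained weights, train only a low-rank update) can be sketched in a few lines of PyTorch. This is an illustrative toy, not code from PeFoMed or LLaVA; the class name, rank, and alpha values here are arbitrary choices for the sketch:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a low-rank trainable update:
    y = W x + (alpha / r) * B(A(x))."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # update starts at zero, so
        # the wrapped layer initially matches the frozen base layer
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Only the two small rank-r matrices are trainable — a tiny fraction
# of the full layer's parameters.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

For a 4096x4096 layer with rank 8, only about 0.4% of the parameters are updated, which is why LoRA fits in far less GPU memory than full fine-tuning.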
This is exactly where I got confused: LLaVA has provided LoRA support for researchers with limited computing resources since [2023/6/11], as demonstrated in this script. It's quite a common and plain idea to simply apply LoRA to a foundation model.