Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Can I use the --dialog argument for QPEFT in multimodal LLaMA2? #182

Open scy-v opened 3 months ago

scy-v commented 3 months ago

Hi! I noticed that in the documentation, the .sh scripts for multi-turn finetuning all use the --dialog flag. Can I add --dialog to alpacaLlava_llamaQformerv2Peft_QF_13B.sh for multimodal LLaMA2 QPEFT with image-text multi-turn conversations?

ChrisLiu6 commented 3 months ago

Using --dialog means using https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/data/conversation/dataset.py instead of the default https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/data/alpaca.py as the dataset class. If your data conforms to the following format:

```json
[
  {
    "conversations": [
      { "from": "human", "value": "some question one" },
      { "from": "gpt", "value": "some answer one" },
      { "from": "human", "value": "some question two" },
      { "from": "gpt", "value": "some answer two" },
      ...
    ],
    "image": "/path/to/image"
  },
  ...
]
```

Then it should work.
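
For reference, here is a minimal sketch (not from the repo itself) of how one might generate an annotation file in this format using only the standard library; the output file name and the sample conversation contents are hypothetical:

```python
import json

# Hypothetical example: one two-turn image-text conversation
# in the format expected by the --dialog dataset class.
annotations = [
    {
        "conversations": [
            {"from": "human", "value": "some question one"},
            {"from": "gpt", "value": "some answer one"},
            {"from": "human", "value": "some question two"},
            {"from": "gpt", "value": "some answer two"},
        ],
        "image": "/path/to/image",
    },
]

# Write the annotation file that the finetuning config would then point at.
with open("dialog_annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```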