A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, qwen-vl, qwen2-vl, phi3-v etc.
ValueError: No chat template is set for this processor. Please either set the `chat_template` attribute, or provide a chat template as an argument. See https://huggingface.co/docs/transformers/main/en/chat_templating for more information. #58
The model I am using is Llama3-Llava-Next-8b, loaded from a local checkpoint. I registered it as follows:
register_model(
    model_id="llama3-llava-next-8b",
    model_family_id="llava-1.6",
    model_hf_path="/DATA/DATA1/gyx/checkpoint/llava/llama3-llava-next-8b",
)
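For reference, loading the processor straight from this checkpoint suggests no chat template is bundled with it (a quick diagnostic; AutoProcessor is just the generic loader, assuming the processor files live in the checkpoint directory):

from transformers import AutoProcessor

# Load the processor shipped with the local checkpoint and inspect its template.
processor = AutoProcessor.from_pretrained(
    "/DATA/DATA1/gyx/checkpoint/llava/llama3-llava-next-8b"
)
print(processor.chat_template)  # prints None, consistent with the error below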
After following the two steps in https://github.com/zjysteven/lmms-finetune/issues/13, I still encounter the error:
Unused or unrecognized kwargs: do_pad.
Traceback (most recent call last):
File "/home/gyx/LLM_Distribution/lmms-finetune-main/train.py", line 205, in <module>
train()
File "/home/gyx/LLM_Distribution/lmms-finetune-main/train.py", line 198, in train
trainer.train()
File "/home/gyx/.local/lib/python3.11/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/transformers/trainer.py", line 2427, in _inner_training_loop
batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/transformers/trainer.py", line 5045, in get_batch_samples
batch_samples += [next(epoch_iterator)]
^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/accelerate/data_loader.py", line 550, in __iter__
current_batch = next(dataloader_iter)
^^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 634, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
return self._process_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
data.reraise()
File "/home/gyx/.local/lib/python3.11/site-packages/torch/_utils.py", line 644, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/gyx/.local/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/LLM_Distribution/lmms-finetune-main/collators/llava_1_6.py", line 84, in __call__
temp = self.processor.apply_chat_template(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gyx/.local/lib/python3.11/site-packages/transformers/processing_utils.py", line 1096, in apply_chat_template
raise ValueError(
ValueError: No chat template is set for this processor. Please either set the `chat_template` attribute, or provide a chat template as an argument. See https://huggingface.co/docs/transformers/main/en/chat_templating for more information.
If you could help me resolve this issue, I would greatly appreciate it.
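From the error message, it sounds like the fix would be to set the `chat_template` attribute on the processor and save it back into the checkpoint, along these lines (a rough sketch; the Jinja string below is only an illustrative placeholder, not the actual Llama-3 template):

from transformers import LlavaNextProcessor

processor = LlavaNextProcessor.from_pretrained(
    "/DATA/DATA1/gyx/checkpoint/llava/llama3-llava-next-8b"
)
# Placeholder template for illustration only -- not the real Llama-3 chat format.
processor.chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
)
# Persist it so apply_chat_template() finds a template on the next run.
processor.save_pretrained("/DATA/DATA1/gyx/checkpoint/llava/llama3-llava-next-8b")

If this is the intended approach, where can I find the correct template string for llama3-llava-next-8b?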