zjysteven / lmms-finetune

A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, qwen-vl, phi3-v etc.

Fixing "ValueError: No chat template is set for this processor." #13

Open zjysteven opened 1 month ago

zjysteven commented 1 month ago
  1. If you are using a local checkpoint (i.e., you pass a local model path instead of a Hugging Face model ID to `xxxx.from_pretrained`), please pull the latest content from the corresponding Hugging Face model repo. You should then see either 1) a `chat_template` attribute in the `tokenizer_config.json` file, or 2) a `chat_template.json` file in your checkpoint folder; see the sketch after this list for a quick way to check both.
  2. Try upgrading the transformers library with `python -m pip install --upgrade transformers`.
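For reference, here is a minimal sketch of how you could verify both conditions from step 1 on a local checkpoint. The checkpoint path is a placeholder, and the final `processor.chat_template` check assumes a reasonably recent transformers version:

```python
import json
from pathlib import Path

from transformers import AutoProcessor

# Placeholder path: point this at the local checkpoint you pass to from_pretrained.
checkpoint = Path("/path/to/local/checkpoint")

# Check 1): does tokenizer_config.json carry a chat_template entry?
tokenizer_config = checkpoint / "tokenizer_config.json"
in_tokenizer_config = (
    tokenizer_config.exists()
    and "chat_template" in json.loads(tokenizer_config.read_text())
)

# Check 2): is there a standalone chat_template.json in the checkpoint folder?
has_template_file = (checkpoint / "chat_template.json").exists()

print("chat_template in tokenizer_config.json:", in_tokenizer_config)
print("chat_template.json present:", has_template_file)

# Optional: with a recent transformers version, the loaded processor should
# expose the template directly via its chat_template attribute.
processor = AutoProcessor.from_pretrained(str(checkpoint))
print("processor.chat_template set:", getattr(processor, "chat_template", None) is not None)
```

If both checks come back False even after pulling the latest files from the model repo, that usually points to an outdated checkpoint or transformers installation (step 2 above).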

If the error persists after these two steps, feel free to open an issue describing your specific case in detail.