Closed hmxiong closed 11 months ago
Thanks for your attention and interest. From your comment, it seems you trained your own checkpoint and tested it. The image shows that you used the Vicuna v1.5 delta as the LLM, which can cause issues like this. `conversations.py` may also need to be updated to the corresponding version for the best results. For your case, I would suggest first checking whether you merged the Vicuna delta with the LLaMA base weights.
Appreciated! Following your guidance, I used the merged language model for training, and subsequent tests now produce normal output. When I went back to check, I found that `token_acc` was always 0 when training without merging. If you have a similar problem, please see: https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-only-needed-for-weights-v0
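For anyone unsure what "applying delta weights" involves: the released delta checkpoint stores the per-parameter difference between the fine-tuned model and the base LLaMA weights, and the usable model is recovered by element-wise addition (the FastChat doc linked above provides the actual CLI for this). A minimal sketch of the idea, with tensors modeled as plain Python lists purely for illustration:

```python
# Sketch only: real checkpoints are torch state dicts of tensors,
# and FastChat's apply_delta handles loading/saving for you.

def apply_delta(base_state, delta_state):
    """Recover merged weights: merged[k] = base[k] + delta[k]."""
    if base_state.keys() != delta_state.keys():
        raise ValueError("base and delta checkpoints do not match")
    return {
        key: [b + d for b, d in zip(base_state[key], delta_state[key])]
        for key in base_state
    }

# Toy example with a single hypothetical parameter:
base = {"layer.weight": [0.1, -0.2]}
delta = {"layer.weight": [0.05, 0.3]}
merged = apply_delta(base, delta)
```

Training or running inference directly on the un-merged delta checkpoint is what produces garbage outputs, since its values are offsets rather than usable weights.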
Before merging:
After merging:
Really amazing work! I am very happy to learn from it. However, when I used the fine-tuned model to test on a 3D scene, the output was all garbled characters. What could cause this? Instruction tuning was done with the LAMM_Instruction_10K data, completely following the official tutorial.

Command:

```shell
python cli_demo.py \
    --model lamm_peft \
    --vision_type pcl \
    --encoder_pretrain epcl \
    --encoder_ckpt_path /path/to/epcl_vit_l/epcl_scannet_vit-L-14_256tokens_latest.pth \
    --vicuna_ckpt_path /path/to/vicuna_13b_v1.5_delta \
    --delta_ckpt_path /path/to/my_fine_tuned/pytorch_model.pt
```