Closed Qnancy closed 1 month ago
I successfully disabled LoRA, but the inference answer still contains garbled text
Hi, as you asked in the WeChat group, the stage 2 model is recommended for captioning rather than QA tasks. After removing the LoRA module, it can generate video-to-text normally, but the conversational ability may decline, as expected. If you have any further questions, feel free to reopen this issue or continue asking in the WeChat group. Below is the WeChat for GV Assistant.
I want to test the performance of videochat2_mistral on the dataset after visual-language alignment in stage 2, so I set checkpoint_path to None. The model initialization code follows demo_mistral.ipynb, but the inference output contains repeated garbled characters, as follows:
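One way to make this setup explicit is to gate LoRA injection behind a single config flag, so the stage 2 weights (which contain no LoRA parameters) are loaded into a plain model. This is only a sketch; the flag name `use_lora` and the helper `add_lora_fn` are illustrative, not the repo's actual API.

```python
# Hypothetical sketch: VideoChat2's real config keys may differ.
# Idea: inject LoRA adapters only when the config asks for them, so a
# stage 2 (LoRA-free) checkpoint can be used without touching the code.

def build_model(cfg, base_model, add_lora_fn):
    """Wrap base_model with LoRA only when the config enables it."""
    if cfg.get("use_lora", False):
        # Stage 3 checkpoints include LoRA weights, so inject adapters.
        return add_lora_fn(base_model)
    # Stage 2 checkpoints have no LoRA keys; return the plain model.
    return base_model

# Stage 2 setup: LoRA disabled, no extra checkpoint to merge.
cfg = {"use_lora": False, "checkpoint_path": None}
model = build_model(cfg, base_model="plain_model",
                    add_lora_fn=lambda m: ("lora_wrapped", m))
print(model)  # -> plain_model
```

With this guard, switching between the stage 2 and stage 3 models is a one-line config change rather than commenting code in and out.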
I found that there is no LoRA in the stage 2 weights, so I commented out the code that adds LoRA:
But an error occurs during inference:
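Errors at this point are often a state-dict mismatch: if the checkpoint and the (now LoRA-free) model disagree on which keys exist, strict loading raises missing/unexpected-key errors. A common workaround, sketched below with illustrative key names, is to drop any LoRA entries before loading and load the rest non-strictly. I cannot confirm this is the exact failure without the traceback.

```python
# Hypothetical sketch: filter LoRA adapter weights out of a checkpoint
# before loading it into a model that no longer defines those modules.
# The key names below are illustrative, not from the actual checkpoint.

def strip_lora_keys(state_dict):
    """Return a copy of state_dict without any LoRA adapter weights."""
    return {k: v for k, v in state_dict.items() if "lora" not in k.lower()}

ckpt = {
    "vision_encoder.weight": 1,
    "llm.layers.0.lora_A.weight": 2,  # LoRA adapter entry (illustrative)
    "llm.layers.0.lora_B.weight": 3,
}
clean = strip_lora_keys(ckpt)
print(sorted(clean))  # -> ['vision_encoder.weight']
# With PyTorch one would then call:
#   model.load_state_dict(clean, strict=False)
```

If the error is something other than missing/unexpected keys, posting the full traceback would help narrow it down.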
I don't know what went wrong here and hope to get some help. Thank you very much!