wcy1122 opened 1 year ago
Hi @wcy1122
Another user found that re-downloading the correct checkpoints resolved a similar issue in #104.
Can you make sure that: (1) you downloaded the correct ScienceQA delta; (2) you applied the delta weights to obtain the correct model weights; (3) the base model weights used in the conversion in step (2) are LLaMA, not Vicuna.
Thanks.
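For reference, the checklist above amounts to a delta-merge step. A minimal sketch, assuming the `llava.model.apply_delta` entry point from the LLaVA repo and placeholder local paths (both paths are hypothetical):

```
# Apply the ScienceQA delta to LLaMA-13B (not Vicuna) base weights.
# /path/to/llama-13b must hold the HF-format LLaMA-13B weights.
python -m llava.model.apply_delta \
    --base /path/to/llama-13b \
    --target /path/to/LLaVA-13b-v0-science_qa \
    --delta liuhaotian/LLaVA-13b-delta-v0-science_qa
```

If the base weights are Vicuna instead of LLaMA, the merged checkpoint will load without errors but produce degraded outputs, which matches the low accuracy reported here.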
Hi @haotian-liu, thanks for your reply. This looks strange. I used LLaMA-13B from https://huggingface.co/decapoda-research/llama-13b-hf and downloaded the delta weights from https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0-science_qa.
Hi. I found that in your released result file here, almost all outputs start with "Assistant:". But when I run inference with your released checkpoint, only about half of the outputs start with "Assistant:"; in most cases the model directly outputs "\n The answer is A.". I suspect something is wrong with the inference prompt?
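If the accuracy drop comes from the answer parser tripping over the inconsistent "Assistant:" prefix, a tolerant extractor sidesteps it. A minimal sketch (`extract_answer` is a hypothetical helper, not part of the LLaVA eval script) that handles both output shapes described above:

```python
import re

def extract_answer(text: str):
    """Pull the predicted option letter out of a model response.

    Handles both 'Assistant: The answer is A.' and a bare
    '\n The answer is A.' by stripping any leading role prefix
    before matching.
    """
    # Drop a leading "Assistant:" role prefix if present.
    text = re.sub(r"^\s*Assistant:\s*", "", text)
    # Match the canonical ScienceQA answer sentence.
    m = re.search(r"The answer is ([A-E])", text)
    return m.group(1) if m else None

print(extract_answer("Assistant: The answer is A."))  # A
print(extract_answer("\n The answer is B."))          # B
```

If the official scorer only matches responses that begin with "Assistant:", the bare outputs would all be counted wrong, which alone could explain an accuracy near 40%.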
Hi, I ran into the same problem. How did you solve it?
Question
Hello, I tried to run inference with your released 13B checkpoint on ScienceQA using the latest code. However, the accuracy is only around 40%, far below the reported 90.89%. Is something wrong?
My convert command line:
My inference command line: