Open lsnls opened 7 months ago
Hello, thanks for your outstanding work!
I tested the open-source weights wisdomik/Quilt-Llava-v1.5-7b. Based on my test results, I guess the weights were trained from the LLaVA checkpoint (7B language model), with stage 1 trained for 0 epochs and stage 2 trained for 3 epochs. Unfortunately, one test metric differs substantially from what you reported in your paper: the result on the closed set of Quilt-VQA w/ red circle. My test result was 71.3, while the paper reports 77.78. I am looking forward to your reply! Thank you a million!
Hi,
How did you evaluate the model? In quilt_eval.py, where does 'answer-file-llava-zeorshot.jsonl' come from? If I set --anchor to None, I only get 'yes/no accuracy = 62.9738'.
I also encounter the same issue. Do you have any solution now?
Adding a prompt such as "Please choose from the following two options: [Yes, No]" may help.
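A minimal sketch of that suggestion, assuming a LLaVA-style questions .jsonl where each line is a JSON record with a "text" field holding the question (the field name is an assumption and may differ in your file):

```python
import json

# Hypothetical suffix that constrains the model to a closed Yes/No answer.
SUFFIX = " Please choose from the following two options: [Yes, No]"

def add_yes_no_prompt(line: str) -> str:
    """Append the Yes/No constraint to one .jsonl question record."""
    record = json.loads(line)
    record["text"] = record["text"].rstrip() + SUFFIX
    return json.dumps(record)

if __name__ == "__main__":
    # Rewrite every question before running inference.
    with open("questions.jsonl") as fin, open("questions_yesno.jsonl", "w") as fout:
        for line in fin:
            fout.write(add_yes_no_prompt(line) + "\n")
```

This only constrains the model's output format; whether it closes the gap to the reported 77.78 would still need to be verified against the paper's exact evaluation setup.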