sexan opened this issue 11 months ago
Hi @sexan,
Thank you for your interest in our work.
As for the inference speed: with a single node (8 A100s), the total time is about 1-1.5 hours, so 10 hours on a single GPU sounds right. The padding approach should be consistent with the original LLaVA, as should the generation and stopping functions. As for the COCO evaluation, the results will differ slightly under different inference settings. If you have checked the visualization results, you can keep using your current script even though it yields a higher number; we think this is acceptable, and we encourage you to share it with the community via a pull request. Our evaluation script will be released with the entire project soon.
Thanks again for your attention. We hope for your continued interest and suggestions to improve Griffon.
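For anyone else reading this thread, the single-node speedup simply comes from sharding the data across GPUs and running one inference process per device, roughly like the sketch below (not our released script; the file and script names are placeholders):

```python
# Rough sketch of per-GPU sharding; paths and script names are placeholders,
# and the released evaluation pipeline may differ.
import json
import os
import subprocess

NUM_GPUS = 8
with open("coco_val_questions.json") as f:      # placeholder question/annotation file
    samples = json.load(f)

procs = []
for rank in range(NUM_GPUS):
    shard_file = f"shard_{rank}.json"
    with open(shard_file, "w") as f:
        json.dump(samples[rank::NUM_GPUS], f)   # every NUM_GPUS-th sample goes to this GPU
    procs.append(subprocess.Popen(
        ["python", "inference.py",              # placeholder single-GPU inference script
         "--question-file", shard_file,
         "--answers-file", f"answers_{rank}.json"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": str(rank)},
    ))

for p in procs:
    p.wait()                                    # merge answers_*.json afterwards
```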
Thank you for your response.
I've noticed that in your model inference you've customized a stopping criterion, `stopping_criteria`, which adds the `<s>` symbol as a stop token. However, the original LLaVA model uses `</s>` as the stop token by default. In my own inference runs, encountering `</s>` is what actually signals the end of generation, and the added `<s>` stop token doesn't seem to affect the result. Moreover, this custom stopping strategy appears to be suitable only for single-sample inference. Could you please clarify whether I can safely remove this parameter? Looking forward to your response. Thank you!
Hi @sexan, I'm currently trying to reproduce the mAP metrics on COCO and have encountered some issues. Could you please share your evaluation code and scripts? I would greatly appreciate the opportunity to reference your work. Thank you very much for your help!
Hi @sexan, you can also customize the initialization of `stopping_criteria` and remove the `<s>` keyword.
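A minimal sketch of what this could look like, assuming the LLaVA-style `KeywordsStoppingCriteria` helper (adjust the import path to this repo's layout; `tokenizer`, `model`, `input_ids`, and `image_tensor` are prepared as in the single-sample inference script):

```python
from llava.mm_utils import KeywordsStoppingCriteria  # adjust import to this repo's module layout

# Keep only the EOS-style stop string and drop the extra "<s>" keyword.
stop_str = "</s>"
stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)

output_ids = model.generate(
    input_ids,
    images=image_tensor,
    do_sample=False,
    max_new_tokens=1024,
    stopping_criteria=[stopping_criteria],
)
```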
Hi! Thanks for sharing your excellent work. I have some questions about batch inference and evaluation on COCO.
The first question is about batch inference. I tried to evaluate RefCOCO and COCO with your model using per-sample inference, but it is too slow: about 2 hours for RefCOCO and 10 hours for COCO on an A100. To speed things up, I switched to batch inference, and here is the problem: when I left-pad the shorter samples, the results differ from the unpadded case, which is very strange. Have you encountered this problem?
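For context, this is roughly how I set up the batched call (a hypothetical sketch that ignores the image-token preprocessing; `tokenizer`, `model`, `prompts`, and `image_tensor_batch` are prepared as in the single-sample script), and I suspect the padding side / attention mask is where my results diverge:

```python
# Left-pad so all prompts end at the same position, and pass the attention
# mask so the padded positions are ignored during generation.
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.unk_token   # fall back to unk (or eos) as the pad token

batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

output_ids = model.generate(
    input_ids=batch.input_ids,
    attention_mask=batch.attention_mask,        # padded positions must be masked out
    images=image_tensor_batch,                  # stacked image tensors for the batch
    do_sample=False,
    max_new_tokens=1024,
)
```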
The second question is about evaluation on COCO. I wrote an evaluation script based on the metric computation described in your paper and chose a prompt template from the paper. However, the metrics I get are higher than those reported in the paper, so I want to know whether there is a problem with my evaluation script. Could you release your evaluation script? Thanks!
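In case it helps pinpoint the difference, my script is essentially the standard pycocotools evaluation run over detections parsed from the model's text outputs (a rough sketch; `coco_results` comes from my own, hypothetical parsing code):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")    # ground-truth annotations
# coco_results: list of dicts like
#   {"image_id": 42, "category_id": 18, "bbox": [x, y, w, h], "score": 0.9},
# produced by parsing the model's text outputs.
coco_dt = coco_gt.loadRes(coco_results)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                                   # prints mAP / AP50 / AP75, etc.
```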