-
Hello,
When you evaluate LVLMs on generative tasks, how do you set the parameter "max_new_tokens" or "max_length"?
It may have a big influence on the final results. Thank you!
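For reference, here is roughly where that parameter would appear in a Hugging Face-style generation call; the checkpoint name and the value 512 below are purely illustrative assumptions on my part, not your actual evaluation settings:
```
# Illustrative sketch only: the checkpoint name and the max_new_tokens value
# are assumptions, not the settings used in this evaluation.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # hypothetical LVLM checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.new("RGB", (336, 336))  # placeholder image, just for the sketch
prompt = "USER: <image>\nDescribe this image in detail. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")

# max_new_tokens caps only the generated continuation, whereas
# max_length would cap prompt + continuation together.
output_ids = model.generate(**inputs, do_sample=False, max_new_tokens=512)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```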
-
Thanks for the awesome project!
I have a few questions:
- I wonder how VCD would perform on the LLaVA suite of benchmarks that are not focused on hallucination, e.g., GQA, ScienceQA, TextVQA, etc. Wo…
-
Hi. Thanks for the great work. I tried to simply prepend the following:
```
import transformers
from llava.cca_utils.cca import llamaforcausallm_forward, cca_forward
transformers.models.llama.LlamaFo…
```
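i.e., my guess at the full patch, going only by the import names, is the sketch below; which attributes are actually meant to be overridden is an assumption on my part, so please correct me if this is not the intended usage:
```
# Assumed completion of the truncated snippet above, inferred from the import
# names; the exact attributes to override are my guess, not confirmed usage.
import transformers
from llava.cca_utils.cca import llamaforcausallm_forward, cca_forward

# Swap in the CCA forward passes before the LLaVA model is instantiated.
transformers.models.llama.LlamaForCausalLM.forward = llamaforcausallm_forward
transformers.models.llama.modeling_llama.LlamaModel.forward = cca_forward
```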
-
### Motivation
Recently, there have been many good papers that try to alleviate hallucinations in large vision-language models **during the decoding process**, such as:
OPERA: Alleviating Hallucination in Mu…
-
Nice work. I would like to ask a question about LURE. LURE needs to mask the object during inference and then correct it. However, POPE and MME are discriminative tasks, using YES/NO to answer questio…
-
First and foremost, thank you for writing this paper; it was very intriguing and informative. I have a question that arose during my reading.
What are the conceptual benefits when the supervisor mo…
-
Your paper uses the ICLR camera-ready version, which is incorrect.