shikiw / OPERA

[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation

Over-Trust Logit Penalty #17

Closed hubujy closed 3 months ago

hubujy commented 4 months ago

Thank you for this amazing paper! I have looked through the code, but I could not find the specific implementation of the Over-Trust Logit Penalty. Could you please point me to the relevant code? Thank you very much for your help.

shikiw commented 4 months ago

Thanks for your appreciation.

The implementation of OPERA is located at transformers-4.29.2/src/transformers/generation/utils.py. You can find the function opera_beam_search there; the over-trust logit penalty is applied at Line 3449.
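
For reference, the core idea is roughly the following: within a local window of recently generated tokens, a column of consistently high self-attention on one earlier token ("knowledge aggregation") signals over-trust, and its column-wise attention product is subtracted from the beam candidates' scores. Below is a minimal sketch of that idea only, not the code in utils.py; the function name and the `scale`/`alpha` parameters are illustrative, and the authoritative logic is in `opera_beam_search`:

```python
# Illustrative sketch of the over-trust penalty idea from the OPERA paper.
# Names (over_trust_penalty, scale, alpha) are hypothetical; the actual
# implementation is in opera_beam_search in
# transformers-4.29.2/src/transformers/generation/utils.py.
import torch

def over_trust_penalty(attn: torch.Tensor, scale: float = 50.0, alpha: float = 1.0) -> torch.Tensor:
    """Compute a penalty from a local self-attention window.

    attn: (k, k) lower-triangular attention weights over the last k
          generated tokens (rows attend to columns).
    Returns a scalar that grows when one earlier column receives
    consistently high attention from all subsequent rows.
    """
    k = attn.size(0)
    if k < 2:
        return attn.new_zeros(())
    # Scale up the (typically small) attention values so the
    # column-wise product does not vanish numerically.
    scaled = scale * attn
    # For each column j, multiply the attention it receives from all
    # later rows i > j; a large product signals over-trust in token j.
    col_scores = [scaled[j + 1:, j].prod() for j in range(k - 1)]
    phi = torch.stack(col_scores).max()
    return alpha * phi

# Beam candidates would then be rescored roughly as:
#   candidate_score = log_prob - over_trust_penalty(local_attn)
```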