boheumd / MA-LMM

(2024CVPR) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding
https://boheumd.github.io/MA-LMM/
MIT License

Curious about the experimental stability? #18

Closed vhzy closed 3 months ago

vhzy commented 4 months ago

I modified the model and evaluation code to train and test on the NExT-QA dataset. However, the results fluctuate by about 1 percentage point across training runs. Are the experimental results stable during training on your end? I suspect this variability might stem from certain parameter settings of the large language model (LLM). Do you have any empirical suggestions for addressing this issue? Thank you!
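One common source of run-to-run variance is unseeded randomness (data shuffling, dropout, weight init). A minimal, generic sketch of seed fixing is below; it is not taken from the MA-LMM codebase, and the PyTorch-specific calls mentioned in the comments (`torch.manual_seed`, `torch.cuda.manual_seed_all`, cuDNN determinism flags) are standard API but how the repo wires them up is an assumption:

```python
import os
import random

def set_seed(seed: int) -> None:
    """Fix the stdlib RNG and hash seed for reproducible runs.

    In a full training setup one would also call:
        numpy.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    (standard PyTorch API; exact usage in MA-LMM is assumed).
    """
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)

# Identical seeds should yield identical random draws.
set_seed(42)
first_run = [random.random() for _ in range(3)]
set_seed(42)
second_run = [random.random() for _ in range(3)]
```

If the fluctuation persists after seeding everything, the remaining variance usually comes from non-deterministic CUDA kernels or multi-worker data loading order.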

boheumd commented 4 months ago

Hello, regarding the ~1 percentage point fluctuation: is that on the test set or the training set?

vhzy commented 4 months ago

Sorry, I re-ran your code and the result is stable. There were some bugs before. Thanks for your reply!
