Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0
Any params to constrain `min_len` when using `beamsearch` and `unconstrained-training` for VQA finetune and eval? #372
Hi, thank you for this great work. I'm trying to run the scripts you provide, but I've run into a problem after using this finetune script:

I ran `evaluate_vqa_unconstrained.sh`, and every generated answer is an empty string `""`. The answers in my dataset usually have lengths exceeding 200, so what should I do to avoid the empty outputs? I've tried modifying `max_len_b` and `min_len` in `ofa_task.py`, as well as `max_len_b`, `max_len`, and `min_len` in `models/sequence_generator.py` and `fairseq/sequence_generator.py`, but these changes only make finetuning more time-consuming, and the eval script still generates only `""`. Thank you for any reply!
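For anyone hitting the same symptom: beam-search decoders such as fairseq's `SequenceGenerator` typically enforce `min_len` by masking the EOS log-probability until that many tokens have been generated, so an effective `min_len` of 0 lets the model emit EOS immediately and return `""`. A minimal sketch of that mechanism (a toy greedy decoder, not OFA's or fairseq's actual code; all names here are illustrative):

```python
import math

def mask_eos_before_min_len(lprobs, step, min_len, eos_idx):
    # Before min_len tokens have been generated, set the EOS
    # log-probability to -inf so EOS can never be selected.
    out = list(lprobs)
    if step < min_len:
        out[eos_idx] = -math.inf
    return out

def greedy_decode(score_fn, max_len, min_len, eos_idx):
    # Toy greedy decoder: pick the argmax token each step,
    # stopping at EOS or after max_len tokens.
    tokens = []
    for step in range(max_len):
        lprobs = mask_eos_before_min_len(score_fn(tokens), step, min_len, eos_idx)
        nxt = max(range(len(lprobs)), key=lambda i: lprobs[i])
        if nxt == eos_idx:
            break
        tokens.append(nxt)
    return tokens

# A degenerate model that always prefers EOS (index 2) reproduces the
# symptom: with min_len=0 the output is empty; raising min_len forces
# the decoder to emit tokens first.
always_eos = lambda toks: [-1.0, -1.0, 0.0]
print(greedy_decode(always_eos, 10, 0, 2))  # [] -> the empty-string answer
print(greedy_decode(always_eos, 10, 3, 2))  # [0, 0, 0]
```

This suggests checking that the `min_len` you edited is actually the one used at eval time: if the evaluation script constructs its own generator (or reads generation args from the command line), changes made only in the task or model files may never reach the decoder.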