Hi,
That's an interesting work, thanks for sharing the code!
I see that you have already directed several users to Oscar's evaluation procedure to generate the JSON file with image captions. I tried that, but it requires many adaptations, as the models and the input data are very different. It seems like a delicate merge rather than just running another script. Am I missing something? Did you use the run_captioning.py script with the evaluation flag?
Thanks