zhangliang-04 opened this issue 8 months ago
Sorry, it seems I made a mistake in the code; I just updated it. Can you try this version? Use test.sh, which uses single_turn_eval_multitask.py. For ChartQA's test_all.json, you can refer to the dataset class in single_turn_eval_multitask.py to build this JSON; it is straightforward.
You also need to fix some paths in single_turn_eval_multitask.py, including the images directory and the annotations (QA.json).
I just uploaded the test JSON for ChartQA, which is accessory/test_all1.json. It is converted from the original ChartQA repo; I simply merged the human and the augmented QAs.
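For anyone rebuilding this file themselves, the merge step can be sketched as below. This is a minimal sketch, not the maintainer's actual conversion script: it assumes the two ChartQA splits are plain JSON lists of QA records (the exact file names and any per-record field remapping required by the dataset class in single_turn_eval_multitask.py may differ).

```python
import json

def merge_chartqa_splits(human_path, augmented_path, out_path):
    """Concatenate the human and augmented QA lists into one JSON file.

    Assumes each input file holds a JSON array of QA records, as in the
    original ChartQA repo's test splits.
    """
    with open(human_path) as f:
        human = json.load(f)
    with open(augmented_path) as f:
        augmented = json.load(f)
    merged = human + augmented  # simple concatenation, no deduplication
    with open(out_path, "w") as f:
        json.dump(merged, f, indent=2)
    return merged
```

If the dataset class expects different field names than the raw ChartQA records provide, a small per-record remapping would be needed before dumping.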
Many thanks! I will try it later.
Thank you for open-sourcing this! I want to reproduce the ChartQA performance of ChartAst-S. I notice there is a yaml file named
./chart_multitask_mixed_othertypebasetype.yaml
in the inference script accessory/exps/finetune/mm/test.sh, but I cannot find it anywhere. What should its contents be if I want to run inference on ChartQA? In addition, were the ChartQA evaluation results in the paper produced by ./accessory/eval_mm/evaluate.py?