OpenGVLab / ChartAst

ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.

Issues about evaluating on ChartQA #6

Open · zhangliang-04 opened this issue 8 months ago

zhangliang-04 commented 8 months ago

Thank you for open-sourcing this! I want to reproduce the ChartQA performance of ChartAst-S. I noticed that the inference script accessory/exps/finetune/mm/test.sh references a yaml file named ./chart_multitask_mixed_othertypebasetype.yaml, but I cannot find it anywhere. What should its contents be if I want to run inference on ChartQA? In addition, were the ChartQA evaluation results in the paper produced by ./accessory/eval_mm/evaluate.py?

FanqingM commented 8 months ago

Sorry, it seems I made a mistake in the code; I have just updated it. Can you try this version? Use test.sh, which runs single_turn_eval_multitask.py. For ChartQA you need a test_all.json; you can refer to the dataset class in single_turn_eval_multitask.py to build this json, it is straightforward.
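For concreteness, one entry in that json might look like the sketch below. The field names ("image", "question", "answer") are assumptions for illustration only; the authoritative schema is whatever the dataset class in single_turn_eval_multitask.py actually reads.

```python
# Hypothetical shape of a single test_all.json entry -- field names are
# assumptions; confirm against the dataset class in single_turn_eval_multitask.py.
example_entry = {
    "image": "two_col_40643.png",  # chart image filename under the images directory
    "question": "What was the revenue in 2019?",
    "answer": "12.5",
}
```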

FanqingM commented 8 months ago

You also need to fix some paths in single_turn_eval_multitask.py, including the images directory and the annotations (QA json).
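Something along these lines, though the actual variable names in the script may differ; these are placeholders for the two paths mentioned above:

```python
# Hypothetical path edits in single_turn_eval_multitask.py -- placeholder
# names, not the script's real identifiers.
IMG_DIR = "/path/to/ChartQA/test/png"           # directory containing the chart images
ANN_PATH = "/path/to/accessory/test_all1.json"  # merged QA annotation json
```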

FanqingM commented 8 months ago

I just uploaded the test json for ChartQA, which is accessory/test_all1.json. It is converted from the original ChartQA repo; I simply merged the human and the augmented QAs.
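A minimal sketch of that conversion, assuming the original ChartQA release (vis-nlp/ChartQA) with its test_human.json / test_augmented.json splits and imgname/query/label fields; the output keys are the same assumed ones as above and should be checked against the dataset class:

```python
import json

# Merge ChartQA's human-written and augmented test splits into one file.
# Output field names ("image", "question", "answer") are assumptions --
# verify against the dataset class in single_turn_eval_multitask.py.
merged = []
for split in ["test_human.json", "test_augmented.json"]:
    with open(f"ChartQA/test/{split}") as f:
        for qa in json.load(f):
            merged.append({
                "image": qa["imgname"],   # chart image filename
                "question": qa["query"],  # natural-language question
                "answer": qa["label"],    # gold answer string
            })

with open("accessory/test_all1.json", "w") as f:
    json.dump(merged, f, indent=2)
```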

zhangliang-04 commented 8 months ago

Many thanks! I will try it later.