Closed aopolin-lv closed 10 months ago
Hi @Aopolin-Lv,
Apologies for the late reply. Note that the question-answer pairs (`gt_file`) are the same for correctness, detailed orientation, and contextual understanding.
So, in your case, to evaluate the correctness and detailed orientation criteria:
If you run the first step using the command below, the predictions are stored in `--output_dir <output-dir-path>`:

```shell
python video_chatgpt/eval/run_inference_benchmark_general.py \
    --video_dir <path-to-directory-containing-videos> \
    --gt_file <ground-truth-file-containing-question-answer-pairs> \
    --output_dir <output-dir-path> \
    --output_name <output-file-name> \
    --model-name <path-to-LLaVA-Lightening-7B-v1-1> \
    --projection_path <path-to-Video-ChatGPT-weights>
```
In order to evaluate those criteria, you will need to pass the same `--output_dir` prediction file as the `pred_path` in the step-2 evaluation command.
Hope it's clear now.
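As a concrete illustration, the step-1 output can be sanity-checked before handing it to step 2 via `pred_path`. This is a minimal sketch: the exact schema of the predictions file (keys `video_name`, `Q`, `A`, `pred`) is an assumption for illustration, not confirmed by this thread.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sample mimicking a step-1 predictions file; the key
# names ("video_name", "Q", "A", "pred") are assumed for illustration.
sample_predictions = [
    {"video_name": "v_001", "Q": "What is happening?",
     "A": "A man cooks.", "pred": "A man is cooking."},
    {"video_name": "v_002", "Q": "Where is it set?",
     "A": "A kitchen.", "pred": "In a kitchen."},
]

with tempfile.TemporaryDirectory() as output_dir:
    # Step 1 writes its predictions under --output_dir / --output_name.
    pred_path = Path(output_dir) / "generic_predictions.json"
    pred_path.write_text(json.dumps(sample_predictions))

    # Before running the step-2 evaluation with this file as pred_path,
    # check that every entry carries a question and a predicted answer.
    entries = json.loads(pred_path.read_text())
    assert all("Q" in e and "pred" in e for e in entries)
    print(f"{len(entries)} prediction entries ready for step 2")
```

The same file is reused for both correctness and detailed orientation, since those two criteria score the identical question-answer pairs.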
Hello, following the instructions of step 1 in `quantitative_evaluation`, I obtain three files:

- `generic_qa.json` (with `run_inference_benchmark_general.py`)
- `consistency_qa.json` (with `run_inference_benchmark_consistency.py`)
- `temporal_qa.json` (with `run_inference_benchmark_general.py`)

Then, do I need to generate any other file? And how do they function in step 2? More specifically, if I want to evaluate correctness and detailed orientation, which file generated in step 1 should I input to the `pred_path` in the step-2 command?