Deaddawn / MovieLLM-code


Will the prompt to generate the evaluation benchmark be released? #4

Open Richar-Du opened 7 months ago

Richar-Du commented 7 months ago

Thanks for your awesome work! Section 4.2 states that you present a long video understanding benchmark. Would you please release the prompt used to generate the questions and answers for this benchmark? Thanks in advance :)

Deaddawn commented 7 months ago

> Thanks for your awesome work! Section 4.2 states that you present a long video understanding benchmark. Would you please release the prompt used to generate the questions and answers for this benchmark? Thanks in advance :)

Hi, please refer to https://github.com/Deaddawn/MovieLLM-code/blob/main/eval_movie_qa.py
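For context, GPT-based QA evaluation scripts like the linked eval_movie_qa.py are typically driven as sketched below. The exact prompt wording lives in the linked script; the model name, message structure, and expected scoring format here are illustrative assumptions, not the repo's actual code.

```python
# Minimal sketch of an LLM-judge evaluation loop (hypothetical, not the
# repo's actual implementation). Assumes OPENAI_API_KEY is set in the env.
import json
from openai import OpenAI

client = OpenAI()

def judge_answer(question: str, correct_answer: str, predicted_answer: str) -> dict:
    """Ask an LLM judge whether the predicted answer matches the ground truth."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed judge model
        messages=[
            {"role": "system",
             "content": "You evaluate question-answer pairs. Reply with a JSON "
                        "object like {\"pred\": \"yes\", \"score\": 4}."},
            {"role": "user",
             "content": f"Question: {question}\n"
                        f"Correct Answer: {correct_answer}\n"
                        f"Predicted Answer: {predicted_answer}"},
        ],
    )
    # Assumes the judge complies with the requested JSON format.
    return json.loads(response.choices[0].message.content)
```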

Richar-Du commented 7 months ago

Thanks for your reply. However, https://github.com/Deaddawn/MovieLLM-code/blob/main/eval_movie_qa.py only contains the prompt used for evaluation. Would you please release the prompt used to generate the questions and answers in the benchmark, as well as the one used to generate the QA pairs in the instruction dataset?

Deaddawn commented 7 months ago

> Thanks for your reply. However, https://github.com/Deaddawn/MovieLLM-code/blob/main/eval_movie_qa.py only contains the prompt used for evaluation. Would you please release the prompt used to generate the questions and answers in the benchmark, as well as the one used to generate the QA pairs in the instruction dataset?

Hi, for the prompt used to generate the QA pairs in the benchmark, please refer to https://github.com/Deaddawn/MovieLLM-code/blob/main/prompt_bench.txt. The prompt for the QA pairs in the instruction dataset will be released along with the pipeline code.
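A prompt file like prompt_bench.txt is usually consumed by filling it with source material and sending it to an LLM. The sketch below shows one plausible way to do that; the file name comes from the repo, but the model choice, message layout, and the `generate_qa` helper are hypothetical, not the authors' released pipeline.

```python
# Minimal sketch of driving benchmark QA generation from a prompt template
# (hypothetical usage, not the repo's pipeline). Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def generate_qa(synopsis: str, prompt_path: str = "prompt_bench.txt") -> str:
    """Fill the benchmark prompt template and ask the LLM for QA pairs."""
    with open(prompt_path, encoding="utf-8") as f:
        prompt_template = f.read()
    response = client.chat.completions.create(
        model="gpt-4",  # assumed generation model
        messages=[
            {"role": "system", "content": prompt_template},
            {"role": "user", "content": synopsis},
        ],
    )
    # The QA pairs come back as free text in whatever format the prompt asks for.
    return response.choices[0].message.content
```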

Richar-Du commented 7 months ago

Great! Thanks for your quick reply. Looking forward to the prompt for instruction generation :)

Deaddawn commented 7 months ago

> Great! Thanks for your quick reply. Looking forward to the prompt for instruction generation :)

No problem, stay tuned for the rest of the code O(∩_∩)O