ShishirPatil / gorilla

Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
https://gorilla.cs.berkeley.edu/
Apache License 2.0

Leaderboard questions #235

Open vandyxiaowei opened 6 months ago

vandyxiaowei commented 6 months ago

Hi, thanks for putting up the leaderboard, which is very useful and informative!

I have some questions on the leaderboard:

  1. I didn't find the code for executing the predicted function calls in the eval for the executable category; the executable_checker seems to work with execution results.
  2. How is the summary number (used for ranking) computed from the scores of the different categories?
  3. Which dataset is used to produce the relevance detection results: the one with "relevance" or the one with "no_function_call"?
  4. There seem to be some ground-truth (GT) errors in the parallel_function answers, which contain only a single function call. Here is the list of problematic indices: 14, 18, 19, 23, 87, 114, 139, 168. Could you please double-check?

Thanks a lot!

Fanjia-Yan commented 6 months ago

Hey Xiaowei, let me try to answer your questions as best I can:

  1. At berkeley-function-leaderboard/data/function/gorilla_openfunctions_v1_test_function.py, we have all the executable functions ready. When we evaluate the results, we import all the functions and exec() the predicted function calls. There are two types of results: (1) definitive results, which do not change over time, so we do an exact match; (2) real-time results, which do change over time, so we perform a type match instead. (A rough sketch of this check is included after this list.)

  2. For the leaderboard ranking on the blog post, we compute the overall score as a weighted average: the sum over categories of (num_of_entries_per_category * accuracy_per_category), divided by total_entries. (A small worked example is included after this list.) All categories are included except for "SQL" and "Chatable", which we generated for experimentation.

  3. Sorry for the confusion. no_function_call has been renamed to relevance, since we believe that is more representative of what we are testing. The two should have the same content, but you should use the "relevance" one.

  4. Thank you for pointing this out. I will perform a manual check on the indices you mention and get back to you in this PR.
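In rough pseudocode, the executable check works like this (a minimal sketch only; check_executable, its signature, and the stand-in function below are illustrative, not the actual leaderboard code):

```python
# Minimal sketch of an exec()-based executable check (illustrative only).
# In the real pipeline the callables come from
# data/function/gorilla_openfunctions_v1_test_function.py; a stand-in is
# defined here so the sketch is self-contained.
def get_fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def check_executable(model_call: str, expected_result, is_real_time: bool) -> bool:
    """Execute the model's predicted call and compare it with the ground truth."""
    local_env: dict = {}
    exec(f"result = {model_call}", globals(), local_env)  # e.g. "get_fibonacci(n=10)"
    result = local_env["result"]
    if is_real_time:
        # Real-time results (e.g. live API data) drift over time: only check the type.
        return type(result) is type(expected_result)
    # Definitive results are stable over time: require an exact match.
    return result == expected_result

print(check_executable("get_fibonacci(n=10)", 55, is_real_time=False))  # True
```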
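And the ranking formula, with made-up category names, entry counts, and accuracies purely to show the arithmetic:

```python
# Worked example of the overall score: sum(entries_i * accuracy_i) / total_entries.
# Category names, counts, and accuracies below are invented for illustration only.
categories = {
    "simple":   (400, 0.90),   # (num_of_entries, accuracy)
    "multiple": (200, 0.85),
    "parallel": (200, 0.80),
}
total_entries = sum(n for n, _ in categories.values())
overall = sum(n * acc for n, acc in categories.values()) / total_entries
print(f"overall = {overall:.4f}")  # (400*0.90 + 200*0.85 + 200*0.80) / 800 = 0.8625
```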

vandyxiaowei commented 6 months ago

Fanjia, thanks for the response! Adding some comments inline.

> Hey Xiaowei, let me try to answer your questions as best I can:
>
> 1. At berkeley-function-leaderboard/data/function/gorilla_openfunctions_v1_test_function.py, we have all the executable functions ready. When we evaluate the results, we import all the functions and exec() the predicted function calls. There are two types of results: (1) definitive results, which do not change over time, so we do an exact match; (2) real-time results, which do change over time, so we perform a type match instead.

Is this execution script also in the repo?

> 2. For the leaderboard ranking on the blog post, we compute the overall score as a weighted average: the sum over categories of (num_of_entries_per_category * accuracy_per_category), divided by total_entries. All categories are included except for "SQL" and "Chatable", which we generated for experimentation.

Do we also include the REST, Java, and JavaScript tests? I don't see those numbers in the table.

> 3. Sorry for the confusion. no_function_call has been renamed to relevance, since we believe that is more representative of what we are testing. The two should have the same content, but you should use the "relevance" one.
> 4. Thank you for pointing this out. I will perform a manual check on the indices you mention and get back to you in this PR.

Two more questions:

  1. I see the Gemini Pro results have been added to the leaderboard, but I haven't seen the corresponding code checked in yet.
  2. For each test example, there is a "human_eval_answer" field. Are these the same as the answers in the possible_answer folder?

Thanks!

vandyxiaowei commented 6 months ago

Some additional minor data issues: in the single_function (generic) test cases, at index 337, the cards parameter in the poker_game_winner function definition is incomplete: "cards": {"type": "object", "description": "An object containing the player name as key and the cards as values in a list."}

At index 297, the question is the same as the target: "question": "music.theory.chordProgression(progression=['I', 'V', 'vi', 'IV'])"

In the possible_answers files, some function names have the "_1" suffix. Is this expected?

HuanzhiMao commented 5 months ago

Hi XiaoWei,

We just released the updated evaluation pipeline, so let me answer your concerns with our new code.

> In the possible_answers files, some function names have the "_1" suffix. Is this expected?

This is expected. We include the _1 suffix in possible answers (especially for the multiple-function categories) to make the function names unique: because the possible answer is a dict keyed by function name, we have to add the suffix to keep the keys unique. However, this does not affect accuracy, because checker.py has a helper called find_description that handles it. Specifically, line 39 solves this issue (note that name is a string, which could contain the suffix). (screenshot attached)
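The idea is roughly the following (a simplified illustration only; the helper names below are hypothetical, and the real logic is in checker.py's find_description):

```python
import re

# Hypothetical illustration of suffix-tolerant matching; the actual implementation
# in checker.py may differ in detail.
def strip_numeric_suffix(name: str) -> str:
    """Drop a trailing '_<number>' that was only added to keep dict keys unique."""
    return re.sub(r"_\d+$", "", name)

def find_possible_answer(possible_answer: dict, func_name: str):
    """Return the matching entry whether or not either side carries the suffix."""
    for key, value in possible_answer.items():
        if strip_numeric_suffix(key) == strip_numeric_suffix(func_name):
            return value
    return None

# Example: a key "calculate_area_1" still matches a model call to "calculate_area".
```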

> For each test example, there is a "human_eval_answer" field. Are these the same as the answers in the possible_answer folder?

This is an oversight on our end. The human_eval_answer field is outdated and we do not use it in the evaluation process; we only use possible_answer as the ground truth. (Thus, some human_eval_answer entries are wrong and misaligned with possible_answer, sorry about that.) We removed human_eval_answer in our April 1 release.

> Do we also include the REST, Java, and JavaScript tests? I don't see those numbers in the table.

Yes, we do include those tests. REST is part of simple executable, and Java and JavaScript are both part of simple AST. You can refer to our blog for the composition of each category on the leaderboard. Also, with the new evaluation pipeline, a data.csv is generated at the end that details the score for each test category, so you will see the Java, JavaScript, and REST scores alongside the simple executable and simple AST scores. That data.csv is more detailed than our leaderboard. (screenshot attached)

> Is this execution script also in the repo?

In checker.py, you will find two functions: executable_checker is the script for the non-REST executable categories, and executable_checker_rest is for the REST executable category. They are chained together in the single_executable_file_runner function in eval_runner.py (screenshot attached); we use the test category to decide which function to call.
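Roughly, the dispatch looks like this (a simplified sketch; the import path and the checker signatures below are assumptions, and the real chaining lives in single_executable_file_runner):

```python
# Rough sketch of the category-based dispatch (simplified; argument names and the
# import path are placeholders -- see single_executable_file_runner in eval_runner.py
# for the real logic).
from checker import executable_checker, executable_checker_rest  # assumed import path

def run_executable_check(test_category: str, model_result, ground_truth):
    if "rest" in test_category.lower():
        # REST executable category: validated against HTTP responses.
        return executable_checker_rest(model_result, ground_truth)
    # All other executable categories: the test functions are imported and exec()'d.
    return executable_checker(model_result, ground_truth)
```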

Please let me know if you have more questions. Happy to answer them.