JasonZhu1313 opened 3 months ago
Hi Jason, thanks for your interest in Berkeley Function-Calling Leaderboard!
Responding to your question, there are two things we want to raise. First, we noticed that you added special tokens in places, namely `<s>[INST]` at the start of the system prompt and user prompt, and `[/INST]` at the end:
```python
system = "<s>[INST] You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."

return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: [/INST]"
return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: [/INST]"
```
We provide a `get_prompt` function in the repo and on Hugging Face, consistent with our hosted endpoint's inference, defined as follows. Please use our official `get_prompt` method in your OSS evaluation so your replication is consistent with our evaluation. Thank you!
```python
import json


def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
```
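For reference, here is a small usage sketch of `get_prompt`; the example query and function schema below are made up for illustration and are not from the BFCL dataset.

```python
# Hypothetical example: build a prompt for a single weather-lookup function.
example_functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

prompt = get_prompt("What is the weather in Berkeley?", example_functions)
print(prompt)
# -> "<system text>\n### Instruction: <<function>>[{...}]\n<<question>>What is the weather in Berkeley?\n### Response: "
```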
Secondly, our `oss_handler` is not intended to evaluate `gorilla-openfunctions-v2` (vLLM hosted) at the moment, since we run inference for OSS models' responses through a format that is not consistent with that of `gorilla-openfunctions-v2`. We support evaluation on our hosted endpoint through `gorilla_handler` (via API calls to our hosted endpoint), so one should use the `decode_ast` and `decode_execute` defined in `gorilla_handler` to parse Gorilla responses, instead of those defined in `oss_handler`.
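To illustrate why the choice of decoder matters, here is a rough sketch of the idea only, not the actual `gorilla_handler` implementation: `gorilla-openfunctions-v2` emits Python-style call strings such as `get_weather(city='Berkeley')`, which can be parsed into a name/arguments structure with Python's `ast` module, whereas `oss_handler`'s decoders expect a different response format.

```python
import ast


def decode_call_string(response: str) -> dict:
    """Parse a Python-style call string like "get_weather(city='Berkeley')" into
    {"name": ..., "arguments": {...}}. Illustrative only; use the official
    decode_ast / decode_execute from gorilla_handler for actual evaluation."""
    call = ast.parse(response.strip(), mode="eval").body
    if not isinstance(call, ast.Call):
        raise ValueError("Response is not a single function call")
    return {
        "name": call.func.id,
        "arguments": {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords},
    }


print(decode_call_string("get_weather(city='Berkeley')"))
# {'name': 'get_weather', 'arguments': {'city': 'Berkeley'}}
```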
We will support evaluating the vLLM-hosted `gorilla-openfunctions-v2` for reproducibility and then close this issue shortly via a new PR. Thanks again for bringing this to our attention!
@CharlieJCJ @ShishirPatil Hey, thanks for the detailed response. I started PR https://github.com/ShishirPatil/gorilla/pull/360 to add an openfunctions-v2 handler that addresses this issue, and I hope to finish testing by end of day so we can close it.
Describe the bug
Great work on gorilla!
I used the OSS model checkpoint https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2 with vLLM to try to reproduce the leaderboard score using local GPU inference (4x A100). However, I obtained lower summary AST scores than the leaderboard reports. I am wondering whether I am using the wrong prompt template or missed something. Your help would be much appreciated.
Evaluation accuracy I got locally:
```
summary_ast["accuracy"]: 0.38625954198473283
simple_ast["accuracy"]: 0.21875
multiple_ast["accuracy"]: 0.41
parallel_ast["accuracy"]: 0.005
parallel_multiple_ast["accuracy"]: 0.005
```
To Reproduce
Steps to reproduce the behavior: I run the OSS handler and eval runner with the following commands:
```bash
python model_handler/oss_handler.py --data-path /home/jobuser/gorilla/berkeley-function-call-leaderboard/data/BFCL/questions_for_oss.json --model-name /path_to_model/gorilla-openfunctions-v2

python /home/jobuser/gorilla/berkeley-function-call-leaderboard/eval_checker/eval_runner.py --model /path_to_model/gorilla-openfunctions-v2 --skip-api-sanity-check --test-category simple sql rest relevance parallel_multiple_function parallel_function multiple_function
```
Proposed Solution
Maybe my prompt template is inconsistent with the one used in training, or I missed applying the chat template?
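For context on the chat-template question, a common pattern with Hugging Face tokenizers looks like the sketch below; whether `gorilla-openfunctions-v2` ships a chat template that matches the official `get_prompt` format is an assumption I have not verified.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2")

messages = [{"role": "user", "content": "What is the weather in Berkeley?"}]
# apply_chat_template renders the tokenizer's built-in template, if one is defined;
# compare its output against the official get_prompt() string before running inference.
chat_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(chat_prompt)
```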