ShishirPatil / gorilla

Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
https://gorilla.cs.berkeley.edu/
Apache License 2.0

GPT4 cutoff date is September 2021 - how did this impact evals? #50

Closed: qrdlgit closed this issue 1 year ago

qrdlgit commented 1 year ago

Any new API info would not be in GPT-4's training data.

How much impact do you think this has with respect to relative performance between GPT4 and Gorilla?

Did you do any eval on APIs that existed prior to 09/21 versus those after?

I reviewed the paper but could not find any discussion on this. https://arxiv.org/abs/2305.15334

To be clear, I am not saying this invalidates the ideas, which I think were a fantastic contribution to OS LLMs, but rather that it would be good to understand the precise reason for the superior performance.

fritzprix commented 1 year ago

@qrdlgit It's basically based on a RAG (retrieval-augmented generation) method, so the APIs don't need to be contained in the training set.
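
Roughly, the idea is retrieve-then-prompt; a minimal sketch below (the `retriever` and `llm` objects are placeholders standing in for a vector index and a language model, not Gorilla's actual components):

```python
# Minimal retrieve-then-prompt sketch; `retriever` and `llm` are placeholder
# objects standing in for a vector index and a language model.
def answer_with_rag(question, retriever, llm, top_k=3):
    # Pull the most relevant, up-to-date API docs instead of relying on
    # whatever the model memorized during pre-training.
    docs = retriever.search(question, top_k=top_k)
    context = "\n\n".join(doc["text"] for doc in docs)
    prompt = (
        "Answer using only the API documentation below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```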

qrdlgit commented 1 year ago

Yes, but the paper claims superior performance to GPT4.

I have consistently found that GPT-4 hallucinates less on data it has been trained on. When you add vector retrieval, it does an even better job.

For APIs that were added after the cutoff date, it wouldn't be surprising that GPT4 hallucinations would increase.

This might explain why Gorilla can outperform GPT-4.

This is not a complaint. The Gorilla paper was really great and has lots of fantastic ideas.

I didn't see any discussion of this in the paper. If there was and I missed it, please let me know. I just want to understand.

If you compare performance between Gorilla and GPT4 on APIs that were added after the cutoff date versus ones that came before, what would it look like?

fritzprix commented 1 year ago

They used APIs that have been quite stable for a while, and I believe not much has changed since the cut-off of GPT-4's pre-training, so the benchmark seems fair enough to me.

qrdlgit commented 1 year ago

Don't take this personally, but I'm not sure you are familiar with these details.

e.g., from https://github.com/ShishirPatil/gorilla/blob/main/data/apibench/huggingface_train.json

I found microsoft/xclip-base-patch16-zero-shot, which had an initial commit in the last 9 months.
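
For anyone who wants to check other entries, a rough sketch using huggingface_hub's `list_repo_commits` (it just takes the earliest commit timestamp as a proxy for the publish date):

```python
from huggingface_hub import HfApi

commits = HfApi().list_repo_commits("microsoft/xclip-base-patch16-zero-shot")
# Earliest commit timestamp as a proxy for when the model was first published.
earliest = min(c.created_at for c in commits)
print(earliest)  # compare against GPT-4's 09/2021 cutoff
```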

tianjunz commented 1 year ago

@qrdlgit Thank you for your comments! One thing we need to clarify: we don't require GPT-4 to output exactly the same API here; as long as the API in GPT-4's output has the same functionality, we count it as correct. See the script here: https://github.com/ShishirPatil/gorilla/blob/main/eval/eval-scripts/ast_eval_hf.py. This has been consistent from the very beginning.
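
In pseudocode, the matching criterion is roughly the following (a simplified sketch, not the actual ast_eval_hf.py code; the `functionality_groups` structure is only illustrative):

```python
def is_correct(predicted_api: str, reference_api: str, functionality_groups: dict) -> bool:
    """Count a prediction as correct when it names the reference API, or any
    API grouped under the same functionality as the reference.

    functionality_groups maps a functionality label to a set of API names;
    this structure only illustrates the criterion, not the eval data format.
    """
    if predicted_api == reference_api:
        return True
    return any(
        reference_api in apis and predicted_api in apis
        for apis in functionality_groups.values()
    )
```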

qrdlgit commented 1 year ago

That answers the question, but not in the way you probably intended, i.e., evals were not done with API dates in mind.

Again, Gorilla is still a great idea and paper. A lot of good takeaways for sure.

However, in the future you probably want to be more careful about data leakage / data contamination issues. This is a problem I'm seeing in a lot of papers coming out recently.

One thing you might want to try is evaluating post-cutoff APIs alone. The lack of fine-tuning capability for GPT-4, together with its cutoff date, is a significant Achilles' heel, at least for the moment.

If the performance on that subset is even more clearly SOTA, that would be a great example of how open-source LLMs can be superior for certain use cases. GPT-4 really is an (obsolete) jack of all trades, master of none.

ShishirPatil commented 1 year ago

Thank you for your question and insightful discussion @qrdlgit and @fritzprix! When it comes to the issue of data contamination, we are completely aligned. We have been cautious to ensure that Gorilla doesn't encounter any of the test set data during its training phase. However, we are unable to provide any comment on the training/test data for models that are closed-source.

Your point about splitting APIs before and after 09/2021 is well taken. As @fritzprix pointed out, we would ideally like to believe that an oracle retriever can address the issue concerning the cut-off date as effectively as possible.

To validate this hypothesis, you can conduct a straightforward experiment: given that our training and evaluation datasets are open-sourced, it should be relatively simple to filter out APIs published post 09/2021 and compare performance on the two splits. If you do end up doing it, please feel free to share the results. We would certainly appreciate such a contribution!
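
Something along these lines would work (a sketch only; the file name, JSON-lines layout, and the "api_call" field below are assumptions to adapt to the actual released files):

```python
import json
from datetime import date
from huggingface_hub import HfApi

CUTOFF = date(2021, 9, 30)  # GPT-4 pre-training cutoff
api = HfApi()

def first_commit_date(model_id):
    # Earliest commit timestamp as a proxy for when the API was published.
    commits = api.list_repo_commits(model_id)
    return min(c.created_at for c in commits).date()

# Assumed layout: a JSON-lines file where each entry records the Hugging Face
# model id under "api_call"; adjust the path and key to the real schema.
pre, post = [], []
with open("data/apibench/huggingface_eval.json") as f:
    for line in f:
        entry = json.loads(line)
        model_id = entry["api_call"]  # assumed field name
        (pre if first_commit_date(model_id) <= CUTOFF else post).append(entry)

print(f"pre-cutoff APIs: {len(pre)}, post-cutoff APIs: {len(post)}")
# Then run the existing eval scripts (e.g. ast_eval_hf.py) on each split and compare.
```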

qrdlgit commented 1 year ago

Heh! This is my contribution I'm afraid. g'luck.