Closed: ggerganov closed this issue 1 year ago
Going to attempt this, should it be in a separate folder (ex. examples/jeopardy) or should it be thrown directly in examples?
In examples/jeopardy. A README with instructions, question files, eval scripts, plot scripts, etc. would be nice.
I extracted the data from the spreadsheet. I don't know whether it is necessary to reverse Jeopardy's "answer in the form of a question" format; I think LLMs should be able to emulate it.
I have a notebook here where I use LangChain to create prompts for evaluating the quiz data.
It looks like I was right: the models can play Jeopardy without needing to change the data.
https://gist.github.com/SlyEcho/a1e6ac9e44eb48a6769b44b61050b635
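For context, a minimal sketch of what such an evaluation boils down to (the prompt template and the lenient substring match here are illustrative assumptions, not the actual code from the gist):

```python
# Sketch: build a Jeopardy-style prompt from a category/clue pair and
# check a model's answer. The template and the lenient matching rule
# are assumptions for illustration only.

def build_prompt(category: str, clue: str) -> str:
    """Turn a category/clue pair into a plain instruction prompt."""
    return (
        f"Category: {category}\n"
        f"Clue: {clue}\n"
        "Answer:"
    )

def is_correct(model_output: str, expected: str) -> bool:
    """Lenient check: the expected answer appears somewhere in the output."""
    return expected.lower() in model_output.lower()

prompt = build_prompt("U.S. PRESIDENTS",
                      "He was the first President of the United States.")
print(prompt)
print(is_correct("The answer is George Washington.", "George Washington"))
```

A stricter scorer (exact match, or requiring the "What is ...?" form) would change the numbers, so whichever rule is chosen should be documented in the README.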
I was browsing reddit and saw this post:
https://www.reddit.com/r/LocalLLaMA/comments/12xkm9v/alpaca_vs_final_jeopardy/
If anyone is interested, it would be great to add such an evaluation as an example to llama.cpp, with instructions for running it against different models (LLaMA, Alpaca, Vicuna, etc.) and different quantizations. Here is the original work by @aigoopy, which can be a good starting point:
https://github.com/aigoopy/llm-jeopardy
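Running the same question file against several models/quantizations mostly means varying the model path passed to llama.cpp's main binary. A hedged sketch of generating those invocations (the model file names and the questions path are hypothetical; `-m`, `-f`, `-n`, and `--temp` are standard main flags):

```python
# Sketch: build llama.cpp command lines for several quantized models.
# Model paths below are hypothetical; adjust to your local setup.

MODELS = [
    "models/llama-7b-q4_0.bin",
    "models/alpaca-7b-q4_0.bin",
    "models/vicuna-7b-q5_1.bin",
]

def make_command(model_path: str, prompt_file: str) -> list[str]:
    """Argument list for llama.cpp's main binary (pass to subprocess.run)."""
    return [
        "./main",
        "-m", model_path,          # which model/quantization to evaluate
        "-f", prompt_file,         # file containing the question prompt
        "-n", "64",                # cap generated tokens per question
        "--temp", "0",             # greedy decoding for reproducible scoring
    ]

for m in MODELS:
    print(" ".join(make_command(m, "examples/jeopardy/questions.txt")))
```

Driving this from a small shell or Python eval script per model, then collecting the answers for the plot script, would match the structure suggested for examples/jeopardy.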