gkamradt / LLMTest_NeedleInAHaystack

Doing simple retrieval from LLM models at various context lengths to measure accuracy

Question: Can the Haystack have variations? #44

Open BradKML opened 5 months ago

BradKML commented 5 months ago

Since most "needle in a haystack" tests inject a line into a pre-defined book text (which may even be part of the model's training data), one could hypothesize that the LLM is simply "smelling" for text that does not fit its surroundings. So, is it possible to create a haystack that is a mix of multiple articles, or just a list of one-liners, so that the model cannot guess?

gkamradt commented 5 months ago

Hey! You can specify the needle, the question, and the background context, so you can use whatever you want.

We designed it this way so others can supply their own context.

BradKML commented 5 months ago

What would you consider "fair" conditions for ad-hoc generation of the haystack vs. the needle? Are there tools to help with randomized construction of the haystack (and perhaps averaging performance across multiple tests)?

Bonus question: can this be used to evaluate FOSS models as well (esp. those without OpenAI-compatible APIs)? Would Ollama or similar do the job?
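The randomized construction asked about above could look something like the following. This is only a sketch, not part of the repo's API: the function names (`build_haystack`, `averaged_runs`) and parameters (`depth_pct`, `score_fn`) are made up for illustration.

```python
import random

def build_haystack(lines, needle, depth_pct, seed=None):
    """Shuffle unrelated one-liners, then insert the needle at a
    relative depth (0.0 = start of context, 1.0 = end)."""
    rng = random.Random(seed)
    pool = list(lines)
    rng.shuffle(pool)
    idx = round(depth_pct * len(pool))
    pool.insert(idx, needle)
    return "\n".join(pool)

def averaged_runs(lines, needle, depths, trials, score_fn):
    """Average a per-run retrieval score over several random shuffles
    and several needle depths, to smooth out position effects."""
    scores = []
    for depth in depths:
        for trial in range(trials):
            haystack = build_haystack(lines, needle, depth, seed=trial)
            scores.append(score_fn(haystack, needle))
    return sum(scores) / len(scores)
```

Here `score_fn` would wrap an actual model call plus answer grading; passing the seed explicitly makes each shuffled haystack reproducible across models.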

gkamradt commented 5 months ago

These are all great research questions, and I haven't seen anyone dig into them rigorously yet.

Yep, you can definitely test out other models.
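For instance, a locally served model could be queried through Ollama's HTTP API (`POST /api/generate` on the default port 11434). This is a sketch under the assumption that an Ollama server is running locally; the model name `"llama3"` is only an example.

```python
import json
import urllib.request

# Default endpoint for a local Ollama server (assumption: server is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def make_payload(model, haystack, question):
    """Build the JSON body for a single non-streaming generation request."""
    prompt = f"{haystack}\n\n{question}"
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model, haystack, question):
    """Send the haystack + question to Ollama and return the model's answer."""
    data = json.dumps(make_payload(model, haystack, question)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A response-scoring step (exact match or LLM-graded) would then plug in where the OpenAI-based evaluation normally runs.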