Closed cramraj8 closed 1 year ago

Hi, in your scripts the default LLM is set to EleutherAI/gpt-j-6b, but the paper mentions that the Curie model was used for data generation, so I wonder what the purpose of that default is. If you have run results with both models, could you please share the benchmark comparison?

Hi @cramraj8, thank you for your interest in our work! InPars-v1 uses GPT-3's Curie as the LLM to generate synthetic queries. In the second version, InPars-v2, however, we use EleutherAI/gpt-j-6b, an open-source LLM, to generate the queries.
The InPars-v2 paper reports the comparison between v1 and v2 when evaluating on the BEIR benchmark.
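For anyone landing here later, below is a minimal sketch of how a synthetic query can be generated with EleutherAI/gpt-j-6b through the Hugging Face transformers API. This is not the repository's actual generation script; the prompt text, example document, and sampling settings are illustrative assumptions only.

```python
# Illustrative sketch -- not the InPars generation script. It shows how an
# open-source LLM such as EleutherAI/gpt-j-6b can be prompted to produce a
# synthetic query for a document.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision so the 6B model fits on a single GPU
    device_map="auto",
)

# Placeholder document and prompt; the real prompt template differs.
document = "Scientists have described a new species of deep-sea fish ..."
prompt = f"Document: {document}\nRelevant query:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,   # sampling settings here are illustrative, not the paper's exact config
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens (the synthetic query).
synthetic_query = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(synthetic_query)
```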
That makes sense. Thanks for the prompt reply!