OpenBMB / InfiniteBench

Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718
MIT License

Did you use gpt-4-32k for the evaluation? #4

Closed — z379035389 closed this issue 11 months ago

z379035389 commented 11 months ago

If so, how do you handle more than 32K tokens?
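One common way to fit an input longer than a model's context window is middle truncation: keep the beginning and end of the token sequence and drop the middle, since instructions and answers often sit at the edges. The sketch below illustrates that idea on dummy integer "tokens"; it is an assumption for illustration, not necessarily how the InfiniteBench authors handle it.

```python
def truncate_middle(tokens, max_tokens):
    """Keep the head and tail of `tokens`, dropping the middle,
    so the result fits within `max_tokens` tokens.

    This is a generic illustration of middle truncation, not a
    claim about InfiniteBench's actual preprocessing.
    """
    if len(tokens) <= max_tokens:
        return tokens
    head = max_tokens // 2          # first half of the budget
    tail = max_tokens - head        # remaining budget for the end
    return tokens[:head] + tokens[len(tokens) - tail:]

# Example with dummy integer "tokens" standing in for real token IDs:
tokens = list(range(100))
kept = truncate_middle(tokens, 10)  # keeps tokens 0-4 and 95-99
```

In practice the same function would be applied to IDs produced by the model's tokenizer, with `max_tokens` set to the context limit minus room for the prompt and generation.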

z379035389 commented 11 months ago

I found that you used a 128K context window, which answers my question.