SimHoZebs closed this 11 months ago
I'll take a look at this soon
An alternative that I want to suggest:
We can keep the number of GPT calls at three while gathering far more than the current 10 top reviews (more like 30) to produce an even more accurate summary. Our result would carry more 'value' this way. Speed won't be an issue: we can divide the GPT calls across three server requests instead of one and resolve them in parallel, effectively processing 3 requests in the time of 1.
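The fan-out described above could be sketched like this (a minimal Python/asyncio sketch; `summarize_batch` is a hypothetical stand-in for a real GPT call, not this repo's actual code):

```python
import asyncio

async def summarize_batch(reviews):
    # Hypothetical stand-in for one GPT call; a real implementation
    # would send this batch to the OpenAI API and await the response.
    await asyncio.sleep(0)  # placeholder for network latency
    return f"summary of {len(reviews)} reviews"

async def summarize_all(reviews, n_batches=3):
    # Split ~30 reviews into 3 batches and resolve the GPT calls
    # in parallel, so 3 requests finish in roughly the time of 1.
    size = (len(reviews) + n_batches - 1) // n_batches
    batches = [reviews[i:i + size] for i in range(0, len(reviews), size)]
    return await asyncio.gather(*(summarize_batch(b) for b in batches))

summaries = asyncio.run(summarize_all([f"review {i}" for i in range(30)]))
print(summaries)
```

The same idea works with three separate HTTP endpoints resolved in parallel on the client side, as suggested above.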
Closed by 3f304c63264cc602705873f4aa64cfde4dd03c5f, but we should still utilize the larger token limit somehow.
Using gpt-3.5-turbo-16k should allow us to process everything in one query.
Token size can be checked here: https://platform.openai.com/tokenizer
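For a quick local sanity check before hitting the 16k limit, OpenAI's docs suggest that one token is roughly 4 characters of English text. A heuristic estimate (not exact; use the tokenizer linked above or the tiktoken library for real counts) could look like:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic from OpenAI's guidance: ~4 characters per token
    # for English text. Only for ballpark checks, not exact budgeting.
    return max(1, len(text) // 4)

reviews = ["Great product, arrived quickly."] * 30
total = sum(estimate_tokens(r) for r in reviews)
print(total)  # rough token estimate for the whole batch
```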