the-bigbrains / review-summarizer

KnightHacks 2023 - 1st Place Winner of Microsoft & RBC's Challenge

More efficient GPT query #2

Closed SimHoZebs closed 11 months ago

SimHoZebs commented 11 months ago

Using gpt-3.5-turbo-16k should allow us to process everything in one query.

Token size can be checked here: https://platform.openai.com/tokenizer

DanyPalma commented 11 months ago

I'll take a look at this soon

SimHoZebs commented 11 months ago

An alternative that I want to suggest:

We can keep the number of GPT calls at three and gather far more than the top 10 reviews (probably closer to 30) to produce an even more accurate summary; the result would carry more 'value' this way. Speed won't be an issue, since we can divide the GPT calls across three server requests instead of one and resolve them in parallel, effectively processing three requests in the time of one.

SimHoZebs commented 11 months ago

Closed by 3f304c63264cc602705873f4aa64cfde4dd03c5f, but we should still utilize the larger token limit somehow.