We have an extensive compression strategy to deal with token limits; see: https://github.com/Codium-ai/pr-agent/blob/main/PR_COMPRESSION.md
In practice, for small-to-medium PRs, our compression strategy works well within an 8K-token limit. For very large PRs (tens of files), some files may be overlooked.
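To illustrate the general idea only (this is a minimal sketch, not pr-agent's actual implementation; PR_COMPRESSION.md describes the real strategy): diffs can be packed into a fixed token budget and anything that doesn't fit reported as omitted. The `count_tokens` helper, the smallest-first ordering, and the budget numbers below are illustrative assumptions.

```python
# Sketch: pack file diffs into a token budget, smallest-first, so as
# many files as possible fit; list the rest as overlooked. Numbers and
# ordering are assumptions, not pr-agent's real algorithm.
import tiktoken

MAX_TOKENS = 8000          # budget for the whole prompt (assumed)
RESERVED_TOKENS = 1000     # headroom for instructions and the answer (assumed)

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Count tokens the way the target model would."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

def compress_diffs(file_diffs: dict[str, str]) -> str:
    """Pack diffs into the budget; report files that had to be dropped."""
    budget = MAX_TOKENS - RESERVED_TOKENS
    included, overlooked = [], []
    # Smallest diffs first, so the maximum number of files fits the budget.
    for name, diff in sorted(file_diffs.items(), key=lambda kv: len(kv[1])):
        cost = count_tokens(diff)
        if cost <= budget:
            included.append(f"## {name}\n{diff}")
            budget -= cost
        else:
            overlooked.append(name)
    prompt = "\n".join(included)
    if overlooked:
        prompt += "\n\nFiles omitted for size: " + ", ".join(overlooked)
    return prompt
```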
You can always switch to a model with a larger context window, such as 'gpt-3.5-turbo-16k-0613' or 'gpt-4-32k'.
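For example, assuming the default setup where pr-agent reads its model from the `[config]` section of `configuration.toml` (the exact key name is an assumption; check your installed version's settings file):

```toml
[config]
# Use a larger-context model so big PRs fit with less aggressive compression.
model = "gpt-4-32k"
```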
Hey, is there any way to deal with the token limitation?