Open · 0x4007 opened this issue 1 year ago
I'm not sure how to handle long conversations that exceed the context window.
I think we can send multiple requests when the conversation is too long. We can summarize the issue specification to reduce the size of the request, split the comments into conveniently sized chunks, and send one request per chunk. I don't see any necessity in sending exactly one request...
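The chunking idea above could be sketched roughly like this. Everything here is hypothetical (the `Comment` shape, the 4-chars-per-token heuristic, the batch-per-request flow); a real version would use the model's actual tokenizer.

```typescript
interface Comment {
  author: string;
  body: string;
}

// Rough token estimate (~4 characters per token); a production version
// would use a real tokenizer for the target model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Split comments into batches that each fit under a per-request token
// budget, so each batch can be sent as a separate API request.
function chunkComments(comments: Comment[], maxTokensPerRequest: number): Comment[][] {
  const batches: Comment[][] = [];
  let current: Comment[] = [];
  let used = 0;
  for (const comment of comments) {
    const cost = estimateTokens(comment.body);
    if (current.length > 0 && used + cost > maxTokensPerRequest) {
      batches.push(current);
      current = [];
      used = 0;
    }
    current.push(comment);
    used += cost;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each batch would then be sent alongside the (summarized) issue specification, and the per-batch results merged afterwards.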
Instant runs (with caching if necessary) scale much better.
Yes, I am only talking about the case where the conversation is too long...
Also, I am not sure what you mean by instant runs. Obviously we need to use the GPT API, don't we? I don't think caching solves the multiple-request problem in this case, since the issues and comments are all different.
This requires some experimentation, but it would make sense to only offer incentive credit for comments that are on topic.
When the comment incentives are being calculated, we should rely on ChatGPT to filter out any comments that are off topic.
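One way the filter could work is to send the issue specification plus the comments in a single prompt and ask the model for a per-comment verdict. This is only a sketch assuming an OpenAI-style chat completion API; `buildRelevancePrompt` and `parseRelevanceResponse` are hypothetical helpers, and the JSON-array-of-booleans response format is an assumption.

```typescript
// Hypothetical prompt builder: one numbered list of comments, and a request
// for a JSON array of booleans (true = on topic), one per comment.
function buildRelevancePrompt(issueSpec: string, commentBodies: string[]): string {
  const numbered = commentBodies.map((body, i) => `${i + 1}. ${body}`).join("\n");
  return (
    `Issue specification:\n${issueSpec}\n\n` +
    `Comments:\n${numbered}\n\n` +
    `Reply with only a JSON array of booleans, one per comment, ` +
    `true if the comment is on topic for the issue, false otherwise.`
  );
}

// Parse the model's reply, rejecting anything that is not a boolean array
// of the expected length (models do not always follow format instructions).
function parseRelevanceResponse(response: string, expectedCount: number): boolean[] {
  const parsed = JSON.parse(response);
  if (!Array.isArray(parsed) || parsed.length !== expectedCount) {
    throw new Error("Unexpected relevance response shape");
  }
  return parsed.map(Boolean);
}
```

Comments flagged `false` would simply be excluded before the incentive calculation runs.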
In order to save on network requests, we can consider the following:
This could help make dealing with abuse/spam comments hands-off and seamless.
> I'm not sure how to handle long conversations that exceed the context window.
Perhaps if the network request fails for any reason, we simply assume everything is on topic and calculate as normal (with a warning that the bot could not verify on-topic conversations).
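The fail-open behavior described above could look like this. A minimal sketch: `classify` stands in for the real ChatGPT call and `warn` for the bot's warning output; both are hypothetical parameters.

```typescript
// If the relevance check fails for any reason, assume every comment is on
// topic and emit a warning, so incentive calculation still proceeds.
async function relevanceWithFallback(
  commentCount: number,
  classify: () => Promise<boolean[]>,
  warn: (message: string) => void
): Promise<boolean[]> {
  try {
    return await classify();
  } catch {
    warn("Could not verify on-topic conversations; assuming all comments are on topic.");
    return new Array(commentCount).fill(true);
  }
}
```

Failing open keeps contributors paid when the API is down, at the cost of occasionally rewarding off-topic comments; the warning makes that trade-off visible.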