Closed: strich closed this 1 year ago
Yes, any tokens that would push the prompt over the limit are automatically culled.
Ideally, this would be easier to customize, since you may want to do things like history summarization.
But this library is really aimed at being a simple wrapper around OpenAI's chat completion APIs, so the current logic is hard-coded.
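For anyone who wants to manage this themselves, the hard-coded behavior described above (dropping the oldest messages until the prompt fits the budget) can be sketched as follows. This is a hypothetical illustration, not the library's actual code: `Message`, `countTokens`, and `cullToFit` are made-up names, and the character-based token estimate is a rough stand-in for a real tokenizer such as tiktoken.

```typescript
// Hypothetical sketch of sliding-window culling: drop the oldest
// non-system messages until the estimated token count fits the budget.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough stand-in for a real tokenizer: ~4 characters per token
// on average English text.
function countTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function cullToFit(messages: Message[], maxTokens: number): Message[] {
  const result = [...messages];
  let total = result.reduce((sum, m) => sum + countTokens(m.content), 0);
  // Remove the oldest non-system message first, preserving the system prompt.
  while (total > maxTokens) {
    const idx = result.findIndex((m) => m.role !== "system");
    if (idx === -1) break; // only system messages left; nothing more to cull
    total -= countTokens(result[idx].content);
    result.splice(idx, 1);
  }
  return result;
}
```

A summarization strategy, as mentioned above, would replace the `splice` step with a call that condenses the dropped messages into a single summary message instead of discarding them outright.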
Closing for now, as this is less an issue than a question and/or feature request. Feel free to continue the conversation in our Discord: https://www.chatgpthackers.dev
Describe the feature
Based on a cursory glance at the code, it seems that chatgpt-api doesn't cull old messages/text from conversations when they hit the token limit. Is this correct? If so, how do we manage this?