Closed: ed12ed closed this issue 1 year ago
A single request CAN'T exceed the model's max token count, this program is NOT bypassing the token limit!
But you wrote: "Long term memory support! Keep hitting the 4096 tokens context limit? Worry no more with this CLI Bot. It has nearly INFINITE context memory(If you have infinite disk space lol), all thanks to Embeddings!" So I was hoping the CLI Bot could reduce the number of tokens needed to summarize a text given in multiple prompts...
It doesn't work the way you think, you may want to check this explanation, or this. It CAN'T be used to summarize large texts/documents, because it's still limited by the 4095 max token count PER REQUEST!
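To illustrate the distinction being made here: embedding-based "long term memory" stores an unbounded number of past messages, but each request still only includes the most relevant stored chunks that fit inside the per-request token budget. The sketch below is a minimal, hypothetical illustration of that idea; it uses a toy bag-of-words "embedding" and a crude word-count token estimate instead of a real embedding model or tokenizer, and none of these names come from the actual ChatGPTCLIBot code.

```python
import math
from collections import Counter

MAX_CONTEXT_TOKENS = 4095  # the per-request limit quoted in this thread


def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class LongTermMemory:
    """Stores unlimited chunks, but each request only receives the most
    relevant ones that fit within the per-request token budget."""

    def __init__(self):
        self.chunks = []  # grows without bound ("infinite disk space")

    def add(self, text: str) -> None:
        self.chunks.append((text, toy_embed(text)))

    def build_context(self, query: str, budget: int = MAX_CONTEXT_TOKENS) -> list:
        # Rank stored chunks by similarity to the query...
        q = toy_embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        # ...then greedily pack them until the token budget is exhausted.
        picked, used = [], 0
        for text, _ in ranked:
            cost = len(text.split())  # crude token estimate
            if used + cost > budget:
                break
            picked.append(text)
            used += cost
        return picked


memory = LongTermMemory()
memory.add("Alice said she lives in Paris.")
memory.add("The weather yesterday was rainy.")
memory.add("Bob mentioned he likes chess.")

# Total memory can keep growing, but the context sent per request is capped.
context = memory.build_context("Where does Alice live?", budget=10)
```

This is why the bot "remembers" people from an earlier story (the relevant chunks get retrieved into later prompts) while a single oversized message still fails: retrieval selects what goes into the request, it does not raise the request's token ceiling.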
Closed as no response.
Ok thank you, the example you provided there (https://github.com/LagPixelLOL/ChatGPTCLIBot/issues/1) is clear. It works very well. I asked the bot to tell me a story where I meet some people. Afterwards it remembers the people I met and the things they said.
I don't understand why you say your bot has a large memory, because when I submit a text in small parts I keep hitting the max token limit... For example: "Error when calling API: Max tokens exceeded in messages: 4103 >= 4095"