xenodium / chatgpt-shell

A multi-LLM Emacs shell (ChatGPT, Claude, Gemini, Ollama) + editing integrations
https://lmno.lol/alvaro
GNU General Public License v3.0

Entering 'Please reduce the length of the messages.' loop? #86

Closed. fenix011 closed this issue 1 year ago.

fenix011 commented 1 year ago

After a while, ChatGPT says:

This model's maximum context length is 4097 tokens. However, your messages resulted in 5017 tokens. Please reduce the length of the messages.

and stops answering my questions. It seems to enter an infinite loop, repeating the same message again and again instead of ... haha xD

What am I missing?

Thanks, thanks, thanks

xenodium commented 1 year ago

To have contextual conversations, the history must be sent with every API request. Once the history grows past the model's token limit, you start getting these errors. AFAIK it's not straightforward to accurately calculate the limit client-side.
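In other words, each request's payload carries the entire conversation so far, so token usage grows with every exchange. A minimal sketch of the idea (illustrative only, not the package's actual code):

```elisp
;; Each turn appends to the history, and the whole history is sent
;; with the next request, so the payload only ever grows.
(let ((history '("user: hello" "assistant: hi there")))
  (setq history (append history '("user: summarize our chat")))
  ;; A request body would include every element of `history',
  ;; so token counts accumulate until the model's limit is exceeded.
  (length history))  ; => 3
```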

Options:

fenix011 commented 1 year ago

Nice to know. Assuming that context is an important factor for ChatGPT to provide a more reasoned and accurate answer... hmm. How do I avoid clearing the relevant past context? I feel like I'm kind of wishlisting now :-) Is there a way to maintain part of the context (say, by deleting the less useful parts of it, or something like that)?

Err... is there a way to maintain different threads of conversation in different shell buffers, so that the contexts can be more easily managed?

xenodium commented 1 year ago

Is there a way to maintain part of the context (say, by deleting the less useful parts of it, or something like that)?

You could use chatgpt-shell-save-session-transcript, edit the text file, and then chatgpt-shell-restore-session-from-transcript, but that's not super practical.
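For anyone following along, the workflow would look roughly like this. The command names come from the thread itself; whether they prompt interactively for a file may vary by version:

```elisp
;; Rough sketch of the manual trim-and-restore workflow:
(chatgpt-shell-save-session-transcript)          ; save the conversation to a file
;; ... edit the transcript file by hand, deleting the
;;     exchanges you no longer need ...
(chatgpt-shell-restore-session-from-transcript)  ; load the trimmed session back
```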

Err... is there a way to maintain different threads of conversation in different shell buffers, so that the contexts can be more easily managed?

This isn't possible at the moment. It needs work, and I'm also not sure how to predictably handle all the commands that send queries to the shell from other buffers.

I'm thinking that setting chatgpt-shell-transmitted-context-length to a smaller number may be good enough, even when mixing topics, since follow-ups typically refer to something in the last few queries.
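If you want to try that approach, the configuration would look something like this (the value 5 is just an illustrative choice, not a recommendation from the thread):

```elisp
;; Only send the last few exchanges as context with each request.
;; 5 is an arbitrary example value; tune it to taste.
(setq chatgpt-shell-transmitted-context-length 5)
```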

xenodium commented 1 year ago

@fenix011 I've added a very rough/experimental approach (it needs more thorough validation).

Try it with:

(setq chatgpt-shell-transmitted-context-length
      #'chatgpt-shell--approximate-context-length)

Feel free to improve chatgpt-shell--approximate-context-length.
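As a starting point for improvements: a common heuristic is to assume roughly four characters per token for English text with OpenAI-style tokenizers. A hypothetical helper (my own sketch, not the package's actual implementation) might look like:

```elisp
(defun my/approximate-token-count (text)
  "Crudely estimate the token count of TEXT.
Assumes roughly 4 characters per token, a rule of thumb for
English text; non-English text and code can tokenize very
differently, so treat the result as a rough upper bound."
  (max 1 (ceiling (length text) 4)))
```

A context-trimming function could then drop the oldest exchanges until the estimated total fits within the model's token budget.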

xenodium commented 1 year ago

@fenix011 have you had a play with this? Worth enabling by default?

xenodium commented 1 year ago

chatgpt-shell--approximate-context-length has now been the default for a couple of weeks, and I haven't heard of issues so far. Gonna close this issue. Lemme know if you run into problems.