yGuy / chatgpt-mattermost-bot

A very simple implementation of a service for a mattermost bot that uses ChatGPT in the backend.
MIT License

bot stops replying #25

Closed rp1783 closed 1 year ago

rp1783 commented 1 year ago

Had this up and running and just started receiving this error:

```
{\"role\":\"user\",\"content\":\"hey are you still there? \"}],\"model\":\"gpt-3.5-turbo\",\"max_tokens\":2000}","url":"https://api.openai.com/v1/chat/completions"},"status":400}}
```

Any idea what might be causing this?

yGuy commented 1 year ago

Could be just a hiccup at OpenAI. Hard to tell from that information alone. If this persists, please enable the debug log and check whether you can use the OpenAI API at all (e.g. a manual invocation from the command line).
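For such a manual check, something along these lines works on a recent Node.js (the global `fetch` has been built in since Node 18). This is just a throwaway sketch, not part of the bot: `buildPayload` and `checkOpenAI` are hypothetical helper names, and the `model`/`max_tokens` values simply mirror the ones visible in the error log above.

```typescript
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

// Build the same shape of payload that appears in the error log above.
function buildPayload(content: string) {
  return {
    model: "gpt-3.5-turbo",
    max_tokens: 2000,
    messages: [{ role: "user" as const, content }],
  };
}

// Call this manually with your API key. A 400/401 response here means
// the problem sits between your host and OpenAI, not in the bot.
async function checkOpenAI(apiKey: string): Promise<number> {
  const res = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildPayload("ping")),
  });
  console.log(res.status, await res.text());
  return res.status;
}
```

If even this direct call fails, the bot itself is not the culprit.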

stuartConnolly commented 1 year ago

This happens to me too. Set up a new test bot last night, was working fine. Logged back on this morning and it wouldn't respond.

As soon as I restarted my docker container it worked again.

Any ideas why it goes to sleep?

yGuy commented 1 year ago

> Any ideas why it goes to sleep?

No. Sounds like the container stopped or the connection broke. Is there a proxy involved?

Please enable debug log and see whether you can spot anything.

I have had the container running for many days now without a manual restart (it's set to auto-restart on failure), and I have never seen this issue.

rp1783 commented 1 year ago

My issue actually only happens in threads. I tried increasing max_tokens, but it didn't help. If I start a new conversation outside of the thread, it works fine. Any idea what might be causing that?

yGuy commented 1 year ago

Your thread context is probably getting too long. By default, the bot picks up all the content from the last 24 hours in the thread. If that amounts to many tokens, it not only gets (relatively) expensive, but can also break the request. There is currently no logic to restrict the number of tokens, and that is probably where it fails, silently. This could of course be improved, but it really should not happen in regular threads that don't accumulate thousands of words of context within 24 hours.
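The missing safeguard could look roughly like this, a sketch rather than the bot's actual code: drop the oldest thread messages until the estimated token count fits a budget. The `ceil(chars / 4)` estimate is a crude approximation; a real implementation would use a proper tokenizer such as tiktoken.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Rough token estimate: ~4 characters per token for English text.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the newest messages that fit into `budget` tokens, preserving
// chronological order. A single message larger than the whole budget
// would yield an empty result; a real implementation should handle
// that case (e.g. by truncating the message itself).
function trimContext(messages: ChatMessage[], budget: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i].content);
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

With something like this in place, a very long thread would lose its oldest context instead of producing a silent 400 from the API.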

yGuy commented 1 year ago

Can you confirm that this has to do with "Long" threads? If so, we can change the issue title and make this an enhancement.