cogentapps / chat-with-gpt

An open-source ChatGPT app with a voice
https://www.chatwithgpt.ai
MIT License
2.3k stars 489 forks

OpenAI responses can take more than 30 seconds; the timeout setting should be configurable #137

Open ufoe opened 1 year ago

ufoe commented 1 year ago

The OpenAI response can take more than 30 seconds, so the timeout setting should be configurable.

The server should allow a timeout longer than 30 seconds, and users should be able to set their own timeout limit.

PalAditya commented 1 year ago

This seems like a property that could default to 30s and be made settable in https://github.com/cogentapps/chat-with-gpt/blob/main/server/src/config.ts. We could also read it and save it on the device, just like the OpenAI API key is read and saved. Happy to help implement it if this seems useful (and if I understood the process correctly 😅 )
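
A minimal sketch of what that could look like, assuming the server reads a YAML config file; the `requestTimeoutMs` field, `DEFAULT_TIMEOUT_MS` constant, and `loadConfig` helper are illustrative names, not the actual contents of config.ts:

```ts
// Sketch only: field and function names are hypothetical, not the repo's current API.
import * as fs from 'fs';
import * as yaml from 'js-yaml';

interface ServerConfig {
    openaiApiKey?: string;
    // Per-request timeout for upstream OpenAI calls, in milliseconds.
    requestTimeoutMs?: number;
}

// Matches the current hard-coded behaviour described in this issue.
const DEFAULT_TIMEOUT_MS = 30_000;

export function loadConfig(path = './config.yml'): ServerConfig & { requestTimeoutMs: number } {
    const raw = fs.existsSync(path)
        ? (yaml.load(fs.readFileSync(path, 'utf8')) as ServerConfig | null)
        : null;
    return {
        ...raw,
        // Fall back to 30s when the operator hasn't set a value.
        requestTimeoutMs: raw?.requestTimeoutMs ?? DEFAULT_TIMEOUT_MS,
    };
}
```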

ufoe commented 1 year ago

> This seems like a property that could default to 30s and be made settable in https://github.com/cogentapps/chat-with-gpt/blob/main/server/src/config.ts. We could also read it and save it on the device, just like the OpenAI API key is read and saved. Happy to help implement it if this seems useful (and if I understood the process correctly 😅 )

Yes, it would be useful if I could set the timeout in config.yml, like the OpenAI key.

I run it with Docker: docker pull ghcr.io/cogentapps/chat-with-gpt:release, then docker run.

I can't change the timeout.
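
Once such a value is read from config.yml, one way the server could apply it to the upstream OpenAI call is with an AbortController. This is a sketch assuming Node 18+'s global fetch; the function and parameter names are illustrative, not the app's actual request handler:

```ts
// Sketch only: applies a configurable timeout to the upstream OpenAI request.
export async function fetchChatCompletion(
    body: unknown,
    apiKey: string,
    requestTimeoutMs: number, // loaded from config.yml instead of a hard-coded 30s
) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), requestTimeoutMs);
    try {
        const res = await fetch('https://api.openai.com/v1/chat/completions', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${apiKey}`,
            },
            body: JSON.stringify(body),
            signal: controller.signal, // aborts the upstream request when the timeout fires
        });
        return await res.json();
    } finally {
        clearTimeout(timer);
    }
}
```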

ufoe commented 1 year ago

By the way, I think it would be better to expose the ElevenLabs enable/disable setting in config.yml as well.
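
For reference, a hypothetical shape for such a section, expressed as the TypeScript type the server could parse config.yml into (the names are illustrative, not an existing option):

```ts
// Hypothetical ElevenLabs section for config.yml; field names are illustrative.
interface ElevenLabsConfig {
    enabled?: boolean; // server-wide on/off switch for the voice feature
    apiKey?: string;   // optionally supply the ElevenLabs key from the server config
}

// Default: keep the feature off unless the operator enables it in config.yml.
const defaultElevenLabs: ElevenLabsConfig = { enabled: false };
```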