Closed lhchew closed 2 months ago
I encountered the same issue with ollama as well. I think it's because the default value for StreamTimeoutSecond
is too short. Is it possible to configure this value?
https://github.com/mattermost/mattermost-plugin-ai/blob/2d13eb69da506ca83e1845bc6f309fc2b42b86f6/server/ai/openai/openai.go#L51
Me too.
How exactly do we solve this?
I know the fix is to increase that timeout, but the source is laid out differently in each version, so I don't know what to change. I'm using the latest version, 0.8.
Referring to this plugin's configuration source, there is a parameter to configure the streaming timeout in seconds:
```go
StreamingTimeoutSeconds int `json:"streamingTimeoutSeconds"`
```
https://github.com/mattermost/mattermost-plugin-ai/blob/master/server/ai/configuration.go
After you have enabled this plugin in Mattermost, open the Mattermost configuration file (config.json) and look for the section belonging to this plugin. It should look something like this:
"service":{ "apiKey": , "apiURL": , "defaultModel": , "id": , "orgId": , "password": , "tokenLimit": , "type": , "username": }
Append `"streamingTimeoutSeconds": xxx` to the `"service"` object, save, and restart the Mattermost service. The timeout will then be extended to the value configured in the Mattermost config file.
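For example, with an illustrative value of 300 seconds (adjust to your model's speed; the other values are placeholders), the relevant part of the `"service"` object would read:

```json
"service": {
    "apiKey": "...",
    "apiURL": "...",
    "streamingTimeoutSeconds": 300
}
```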
It would be much easier if this field were configurable in the UI.
There is a UI configuration for this in the latest versions. I think this can be closed? @lhchew please reopen if the timeout configuration is not enough to fix this.
Description
Encountered a "timeout streaming" error when the plugin is configured to point to LocalAI; the error is shown below. It works fine when configured to point to OpenAI.
```json
{
  "caller": "app/plugin_api.go:1003",
  "error": "timeout streaming",
  "level": "error",
  "msg": "Streaming result to post failed",
  "plugin_id": "mattermost-ai",
  "timestamp": "2024-05-04 05:45:23.445 Z"
}
```
Steps to reproduce
1. Run Mattermost using Azure Container Instances. Image used is mattermost/mattermost-preview. Mattermost server version is 9.7.3; AI Copilot version is 0.6.3.
2. Run LocalAI using Azure Container Instances. Image used is localai/localai:latest-aio-cpu.
3. Confirm a chat response can be obtained from LocalAI using Postman with the following POST call:
http://xx.xx.xx.xx:8080/v1/chat/completions
```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "How are you?" }
  ]
}
```