nextcloud / integration_openai

OpenAI integration in Nextcloud
GNU Affero General Public License v3.0

ChatGPT-Like Answer not possible anymore #60

Closed manuelkamp closed 10 months ago

manuelkamp commented 10 months ago

Which version of integration_openai are you using?

1.1.1

Which version of Nextcloud are you using?

27.1.3

Which browser are you using? In case you are using the phone app, please specify the Android or iOS version and the device.

Edge, Firefox latest (not relevant)

Describe the Bug

"Completion request error: Unknown text generation error" Error shows up every time i want to use ChatGPT-Like Answers through my LocalAI. LocalAI itself is working fine, tried it via ssh-curl on my local LocalAI instance and on my remote nextcloud instance. The issue has to be this plugin. I did not change any settings since I set it up last week. However i saw that it did upgrade the app (automatically) from 1.0.13 to 1.1.1. However, Image generation via my LocalAI works fine via this app.

Expected Behavior

Get a response from LocalAI

To Reproduce

Open a text file, type /, select "ChatGPT-Like answer", and type anything into it. It results in the error shown in the upper right corner. Error in the log file:

{"reqId":"ZVDYoqeILvOTuLnCwMkWEQAAAAI","level":2,"time":"2023-11-12T13:52:34+00:00","remoteAddr":"10.0.0.1","user":"manuel","app":"integration_openai","method":"POST","url":"/apps/integration_openai/completions","message":"Text generation error: {\"created\":1698852566,\"object\":\"text_completion\",\"id\":\"b36ddd74-a267-412b-b50d-7a50865e5447\",\"model\":\"nous-hermes-llama2-13b\",\"usage\":{\"prompt_tokens\":0,\"completion_tokens\":0,\"total_tokens\":0}}","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0","version":"27.1.3.2","data":{"app":"integration_openai"},"id":"6550da68ef49e"}
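For anyone who wants to repeat the direct check mentioned above (querying LocalAI outside of Nextcloud), here is a minimal sketch of such a request. The base URL, model name, and prompt are placeholders for your own setup, not values taken from this issue's configuration.

```python
import json
import urllib.request

# Placeholder values for illustration; substitute your own LocalAI URL and model.
BASE_URL = "http://localhost:8080/v1"
MODEL = "nous-hermes-llama2-13b"

payload = json.dumps({
    "model": MODEL,
    "prompt": "Say hello in one short sentence.",
    "max_tokens": 64,
}).encode()

req = urllib.request.Request(
    f"{BASE_URL}/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# A healthy OpenAI-compatible response carries a non-empty "choices" list;
# note that the response logged above contains no "choices" at all.
print(body.get("choices"))
```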

MB-Finski commented 10 months ago

Thanks a ton for the report! I found the issue causing the request to the completions endpoint to be broken. I'll get the fix published ASAP. In the meantime you can work around it by tricking the front-end into sending the requests to the chat/completions endpoint: give the model a name that starts with 'gpt-'.
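To make the workaround concrete, here is an illustrative sketch (not the plugin's actual code) of the kind of routing heuristic it relies on: a model name starting with "gpt-" sends the request to chat/completions instead of completions.

```python
# Sketch of the assumed routing heuristic; not taken from integration_openai itself.
def pick_endpoint(model_name: str) -> str:
    if model_name.startswith("gpt-"):
        return "/v1/chat/completions"
    return "/v1/completions"

# Aliasing the local model as e.g. "gpt-nous-hermes-llama2-13b" therefore
# routes around the broken completions code path until the fix lands.
print(pick_endpoint("nous-hermes-llama2-13b"))      # /v1/completions
print(pick_endpoint("gpt-nous-hermes-llama2-13b"))  # /v1/chat/completions
```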

MB-Finski commented 10 months ago

Then again, maybe we should use the chat/completions endpoint by default, since it automatically injects the proper prompt template for the model in the LLM backend, which would improve answer quality for most models. Hmm...
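For readers unfamiliar with the difference: the two OpenAI-compatible endpoints take differently shaped payloads, which is where the prompt-template point comes from. The sketch below uses placeholder values only.

```python
# /v1/completions takes a raw prompt string, so any model-specific prompt
# template has to be baked into the prompt by the caller.
completions_payload = {
    "model": "nous-hermes-llama2-13b",
    "prompt": "Summarize this issue in one sentence.",
    "max_tokens": 256,
}

# /v1/chat/completions takes role-tagged messages; an OpenAI-compatible backend
# such as LocalAI can then apply the model's own chat template server-side,
# which is the answer-quality benefit discussed above.
chat_payload = {
    "model": "nous-hermes-llama2-13b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this issue in one sentence."},
    ],
    "max_tokens": 256,
}
```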

MB-Finski commented 10 months ago

v1.1.2 should be in the app store momentarily. Please let me know if the issue persists! Sorry for the inconvenience!

BTW, you can now choose the endpoint in the admin settings. Some models/setups may benefit from using the chat/completions endpoint.

manuelkamp commented 10 months ago

Thanks, the update popped up and everything works fine again! Just for others: after this app update, the model used in the ChatGPT-like answer dialog was set to the wrong one, so you (and your users) may have to change it back manually once.