giantamoeba closed this issue 1 month ago
I'll do some tests, thank you for your suggestion.
I'll add it in a future release. Thank you.
Thank you for that, I'll be looking forward to it.
@giantamoeba please try the pre-release version 2.2.0pre1 from here: https://github.com/micz/ThunderAI/releases
I've added a new "OpenAI Comp API" integration method. Thank you.
Hello! @micz I believe an "OpenAI Compatible" option should also permit setting an API key (i.e. a Bearer token), analogously to what the stock "OpenAI API" option permits/requires, but it seems that is not configurable at the moment. Could this be added? I'm happy to test (against Ollama / Open WebUI, for example) if that helps.
I could add it as an optional parameter; I've filed #144. I'll get back to you on this issue.
You can find version 2.2.0_oai_comp_key_v1 here.
Please follow these steps:
In this version, with the OpenAI Comp API, if an API key is present in the options, it will be used. @dguembel-itomig let me know how it goes!
Huge thanks @micz, I tried it out!
Working: the request is sent with the configured API key and the connection succeeds.
Not working: the answer is never displayed in the ThunderAI window.
I looked at the debug log and the network traffic. There's a POST request to the API that looks correct to me and is answered with 200 OK, but the response seems to be empty (?), leading to the following console entries (the error is the last one):
[ThunderAI Logger | mzta-popup] Preparing data to load the popup menu: undefined mzta-logger.js:35:44
[ThunderAI Logger | mzta-popup] _prompts_data: [{"id":"prompt_classify","label":"Klassifizieren","type":"0"},{"id":"prompt_reply","label":"Darauf antworten","type":"1"},{"id":"prompt_rewrite_formal","label":"Formal umschreiben","type":"2"},{"id":"prompt_rewrite_polite","label":"Höflich umschreiben","type":"2"},{"id":"prompt_summarize_this","label":"Zusammenfassen","type":"0"},{"id":"prompt_this","label":"Auffordern","type":"2"},{"id":"prompt_translate_this","label":"Übersetzen","type":"0"}] mzta-logger.js:35:44
[ThunderAI Logger | mzta-popup] active_prompts: [{"id":"prompt_classify","label":"Klassifizieren","type":"0"},{"id":"prompt_reply","label":"Darauf antworten","type":"1"},{"id":"prompt_summarize_this","label":"Zusammenfassen","type":"0"},{"id":"prompt_translate_this","label":"Übersetzen","type":"0"}] mzta-logger.js:35:44
[ThunderAI Logger | mzta-popup] tabType: mail mzta-logger.js:35:44
[ThunderAI Logger | mzta-popup] filteredData: [{"id":"prompt_reply","label":"Darauf antworten","type":"1"},{"id":"prompt_classify","label":"Klassifizieren","type":"0"},{"id":"prompt_translate_this","label":"Übersetzen","type":"0"},{"id":"prompt_summarize_this","label":"Zusammenfassen","type":"0"}] mzta-logger.js:35:44
[ThunderAI Logger | mzta-background] Executing shortcut, promptId: prompt_summarize_this mzta-logger.js:35:44
[ThunderAI Logger | mzta-background] [ThunderAI] Prompt length: 564 mzta-logger.js:35:44
[ThunderAI Logger | mzta-background] [openai_comp_api] prefs.chatgpt_win_width: 700, prefs.chatgpt_win_height: 800 mzta-logger.js:35:44
Already stopped listening to websocket events for this window. websockets.js:82:17
Already stopped listening to server sent events for this window. server-sent-events.js:86:17
[ThunderAI Logger | mzta-background] [OpenAI Comp API] Connection succeded! mzta-logger.js:35:44
SyntaxError: JSON.parse: unterminated string at line 1 column 127 of the JSON data
I have recreated the same request (with Postman) using the same payload. The server does answer (the answer is not(!) empty), but it streams its response word by word. Might that be a problem? For example, the response contains many lines that look like this:
data: {"id":"chatcmpl-495","object":"chat.completion.chunk","created":1727861972,"model":"llama3.2:latest","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"Hier"},"finish_reason":null}]}
It ends with
data: [DONE]
at line 284 (approx), so it looks OK to me.
How can I best investigate further and provide useful information?
The lines are correct, it's a streamed response.
Could you tell me which operating system and Thunderbird version you are using? And could you give me a link to download the local LLM you're using?
I am on Ubuntu 24.04 LTS. Thunderbird is 128.2.3esr (64-Bit). I am running against an instance of Open-Webui (latest, = 0.3.30) with an Ollama (latest, = 0.3.12) underneath. The LLM I am running against is llama3.2, from here: https://ollama.com/library/llama3.2
Does that help, or do you need other information? Thank you again for all your efforts, I appreciate it!
Does it happen all the time?
Could you try this version? thunderai-v2.2.0_i145_v3.zip
It adds more logging and should try to handle broken chunks. I'm not able to reproduce the problem, so could you post the logs even if it works? (I'm optimistic :sweat_smile:) Thank you.
Thank you for the fast answer and the new version (2.2.0)! I've tried it out, and now it works. Please find the log file attached. If there's anything else I can do to help, please feel free to get back to me. Thank you again for your quick reply.
P.S. Yes, it happened all the time; I never got an answer back in the ThunderAI GUI. I was thinking it might be a problem on the Ollama/Open WebUI side, but if there is one, I do not see what it would have been; the responses looked correct. Do you have an idea?
[attachment removed]
Thank you @dguembel-itomig for your feedback. Could you try this version? I've added more logs. Thank you.
Please, clear the console before using the addon, so you'll send only the related log. I deleted the attachment in the previous comment because it seems to contain sensitive information.
Hello @micz thank you for being so vigilant. I looked for API keys in the logs (were not there) and missed the rest :-|
Re-tested: I cleared the log, took the GitHub notification mail informing me of your previous comment, and asked the AI to summarize it. I waited for the result, which does indeed come as desired, then exported the log to a file. I hope this helps; please let me know if I can be of further assistance. console-export-2024-10-3_19-22-50.txt
I'll add this code to version 2.2.0. I'll ask you to test the final 2.2.0 pre-release to ensure everything works. Thank you.
I've released version 2.2.0pre5, could you test it? See the changelog: https://github.com/micz/ThunderAI/releases/tag/v2.2.0pre5 Thank you.
Installed and tried out - seems to be working very well so far. No hangs, all answers correctly received and displayed. Will let you know if I stumble across anything, but likely not before early next week. Thank you already!
Available in version 2.2.0.
Is your feature request related to a problem? Please describe.
I am still undecided between Ollama and LM-Studio; each has its pros and cons. However, LM-Studio imitates the OpenAI API, so it should be easy to support.
Describe the solution you'd like
An additional field in the ChatGPT/OpenAI API integration for setting the IP and port, instead of hardcoding them. That would be enough to work with LM-Studio or any other alternative OpenAI-compatible API.