Closed: nikito closed this issue 4 months ago
What are you using as your backend? It isn't responding with the correct value for the `object` field; it should be `chat.completion`.
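For context, here is a minimal sketch of the two response shapes as the OpenAI API spec defines them, trimmed to only the fields relevant here:

```python
# Shape returned by the chat completions endpoint (/v1/chat/completions):
chat_completion = {
    "object": "chat.completion",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "..."}},
    ],
}

# Shape returned by the legacy completions endpoint (/v1/completions):
text_completion = {
    "object": "text_completion",
    "choices": [
        {"index": 0, "text": "..."},
    ],
}
```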
Did some debugging against the endpoint I am using, and it seems it is returning `text_completion` instead of `chat.completion`, so I think the issue is on the other side. Thank you for the info, closing this out as I don't think the issue is on this side. 🙂
Feel free to open an issue to add support for that backend. It isn't too hard to handle backends whose behavior deviates slightly from the OpenAI spec.
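As an illustration of what such support might look like, here is a hypothetical sketch of an extraction helper that tolerates backends that mislabel the `object` field by falling back on the actual shape of the choice (this is not the integration's real code, just one way to do it):

```python
def extract_response_text(response_json: dict) -> str:
    """Pull the generated text out of an OpenAI-style response,
    tolerating backends that mislabel the 'object' field."""
    choice = response_json["choices"][0]

    # Trust the declared object type when it matches the payload.
    if response_json.get("object") == "chat.completion" and "message" in choice:
        return choice["message"]["content"]

    # Otherwise dispatch on the shape of the choice itself: a chat-style
    # choice carries "message", a text-completion choice carries "text".
    if "message" in choice:
        return choice["message"]["content"]
    return choice["text"]
```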
When setting the integration to use the Chat Completions Endpoint, I receive this error:

```
Traceback (most recent call last):
  File "/config/custom_components/llama_conversation/__init__.py", line 296, in async_process
    response = await self._async_generate(conversation)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 231, in _async_generate
    return await self.hass.async_add_executor_job(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 633, in _generate
    return self._extract_response(result.json())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 588, in _extract_response
    return choices[0]["text"]
```
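For anyone hitting the same traceback: it ends inside `_extract_response` at `choices[0]["text"]`, which suggests the integration dispatches on the response's `object` field, roughly along these lines (a guess at the relevant logic, not the actual source):

```python
def _extract_response(self, response_json: dict) -> str:
    choices = response_json["choices"]
    # Hypothetical reconstruction: if the backend labels the response
    # "chat.completion", read the chat-style field...
    if response_json.get("object") == "chat.completion":
        return choices[0]["message"]["content"]
    # ...otherwise assume a legacy text completion. A backend that
    # labels a chat response "text_completion" lands here, and the
    # lookup of choices[0]["text"] fails because the choice actually
    # carries a "message" object.
    return choices[0]["text"]
```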