acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM
491 stars · 56 forks

When using Chat Completions endpoint, get error about missing key 'text' #72

Closed nikito closed 4 months ago

nikito commented 4 months ago

When setting the integration to use the Chat Completions endpoint, I receive this error:

```
Traceback (most recent call last):
  File "/config/custom_components/llama_conversation/__init__.py", line 296, in async_process
    response = await self._async_generate(conversation)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 231, in _async_generate
    return await self.hass.async_add_executor_job(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 633, in _generate
    return self._extract_response(result.json())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/llama_conversation/__init__.py", line 588, in _extract_response
    return choices[0]["text"]
KeyError: 'text'
```

I believe this may be an error in the implementation: the Chat Completions endpoint no longer returns `text` in each choice but instead `message`, per https://platform.openai.com/docs/api-reference/chat/object
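For context, the two OpenAI-style response shapes differ in where the generated text lives. The sketch below is illustrative sample data (not the integration's actual code), with field names taken from the public API reference; branching on the `object` discriminator picks the right key:

```python
# Example payloads in the two OpenAI-style response shapes.
# Field names follow the public API docs; the content strings are made up.
completion_response = {
    "object": "text_completion",
    "choices": [{"index": 0, "text": "Turning on the lights."}],
}

chat_response = {
    "object": "chat.completion",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Turning on the lights."}}
    ],
}

def extract_text(response: dict) -> str:
    """Pick the right field based on the 'object' discriminator."""
    choice = response["choices"][0]
    if response.get("object") == "chat.completion":
        # Chat shape: text is nested under choices[0]["message"]["content"].
        return choice["message"]["content"]
    # Legacy completion shape: text is directly at choices[0]["text"].
    return choice["text"]

print(extract_text(completion_response))  # Turning on the lights.
print(extract_text(chat_response))        # Turning on the lights.
```

Indexing `choices[0]["text"]` on a chat-shaped response is exactly what produces the `KeyError: 'text'` above.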
acon96 commented 4 months ago

What are you using as your backend? It isn't responding with the correct value for the `object` field. It should be `chat.completion`.

nikito commented 4 months ago

Did some debugging against the endpoint I am using, and it seems it is returning `text_completion` instead of `chat.completion`, so I think the issue is on the other side. Thank you for the info; closing out as I don't think the issue is on this side. 🙂

acon96 commented 4 months ago

> Did some debug against the endpoint I am using and it seems it is returning text_completion instead of chat.completion, so think the issue is on the other side. Thank you for the info, closing out as I don't think the issue is on this side. 🙂

Feel free to open an issue to add support for that backend. It isn't too hard to accommodate backends whose behavior deviates slightly from the OpenAI spec.
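One way to accommodate a backend like this (a sketch only, not the project's actual code, and the function name is hypothetical) is to stop trusting the `object` discriminator and instead check which key is actually present in the first choice:

```python
def extract_response_text(response: dict) -> str:
    """Tolerant extraction for OpenAI-style responses.

    Some backends mislabel the 'object' field (e.g. reporting
    'text_completion' from a chat endpoint), so this checks key
    presence rather than relying on the discriminator.
    """
    choice = response["choices"][0]
    if "message" in choice:
        # Chat-shaped choice.
        return choice["message"]["content"]
    if "text" in choice:
        # Legacy completion-shaped choice.
        return choice["text"]
    raise KeyError("choices[0] has neither 'message' nor 'text'")

# Works regardless of what the backend claims in 'object':
print(extract_response_text({"object": "text_completion",
                             "choices": [{"message": {"content": "ok"}}]}))  # ok
```

The trade-off is that a malformed response fails with a clearer error instead of a bare `KeyError: 'text'`.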