xBelladonna closed this issue 5 months ago
This should be fixed in v0.3.1
The intended behavior was to call ",".join()
when in minimal tool format mode and to call json.dumps()
when using reduced or full tool format mode.
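For reference, that intended dispatch can be sketched roughly like this (the mode names and the helper are illustrative, not the integration's actual code):

```python
import json

def format_tools(tools: list, tool_format: str) -> str:
    # Hypothetical sketch: in "minimal" mode each tool is already a short
    # string signature, so the list is comma-joined; in "reduced"/"full"
    # mode each tool is a dict, so the list is serialized as JSON.
    if tool_format == "minimal":
        return ", ".join(tools)
    return json.dumps(tools)

print(format_tools(["turn_on(entity)", "turn_off(entity)"], "minimal"))
print(format_tools([{"name": "turn_on"}], "full"))
```

The key point is that a single code path cannot serve both shapes: ",".join() requires string items, while dict-shaped tool definitions need json.dumps().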
Thanks! I'll give it a try now.
@acon96 I am now getting a new exception with a traceback as follows:
Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:994
integration: Assist pipeline
First occurred: 3:19:56 AM (1 occurrences)
Last logged: 3:19:56 AM
Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 994, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/conversation/agent_manager.py", line 108, in async_converse
result = await method(conversation_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/llama_conversation/agent.py", line 275, in async_process
message = self._generate_system_prompt(raw_prompt, llm_api)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/llama_conversation/agent.py", line 709, in _generate_system_prompt
self._format_tool(*tool)
TypeError: LocalLLMAgent._format_tool() takes 4 positional arguments but 16 were given
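For context, that error message is the classic symptom of a star-unpacking mismatch: self._format_tool(*tool) spreads every element of tool into a separate positional argument, so a 15-element iterable plus self produces 16 arguments against a method that accepts 4. A minimal illustration (hypothetical signature, not the integration's code):

```python
def format_tool(name, parameters, description):
    # Expects exactly three positional arguments
    # (a bound method would also receive self, for four total).
    return f"{name}({parameters}): {description}"

tool = ("light.turn_on", "entity_id", "Turn on a light")
print(format_tool(*tool))  # three elements unpack cleanly into three parameters

too_many = tuple(range(15))
try:
    format_tool(*too_many)  # fifteen elements -> TypeError, as in the traceback
except TypeError as exc:
    print(exc)
```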
Also, I had to edit const.py
and change DEFAULT_GBNF_GRAMMAR_FILE = "output.gbnf"
to DEFAULT_GBNF_GRAMMAR_FILE = "json.gbnf"
in order for the conversation agent to load at all, despite the fact that I have json.gbnf
in that field when configuring the conversation agent in the UI, and GBNF grammar is disabled anyway.
Do you have "script" type entities exposed to the model? Also, I have no idea how I deleted output.gbnf
from the repo... re-adding it now.
Edit: I think I got the _format_tool issue fixed in the develop
branch. Still looking at why it would be ignoring the GBNF settings from the UI.
Yes, I have scripts exposed. Is there currently a bug with those, and should I try un-exposing them to test?
Yeah I just fixed it. There was an if statement I forgot to update that was specific to script entities. The fix is in develop
if you want to install that version.
Given this plus the missing output.gbnf, I'll probably push another update today if you want to just wait for that. I need to fix the test cases first so I can hopefully catch any remaining issues before cutting that release.
Thanks so much for the blazing fast responses to this! It's a very big update and I can imagine there will be some teething issues here and there. Once everything is worked out this will be an extremely flexible and versatile extension!
> Once everything is worked out this will be an extremely flexible and versatile extension!
That's the goal 😄 . Closing this as fixed now that v0.3.2 is out.
Describe the bug
I've recently upgraded to release v0.3, running Home Assistant 2024.6.1 in a Docker container. I have reconfigured my models per the documentation, and by reading through the source code to understand what isn't quite documented. I am using the generic OpenAI-compatible API option with LocalAI, which worked with v0.2.17. However, with v0.3, I cannot use the conversation agent: I consistently receive an error in the logs that halts the pipeline. The error includes a traceback that appears to narrow down the issue, which I have included below. The issue appears to be in the construction of the tools list using the join() function, which fails because it receives the wrong object type (dict instead of string).
Expected behavior
I expect the Assistant pipeline to engage the conversation agent and send an API request to the LocalAI backend. Instead it appears there is a bug in the method that generates the prompt, so an API request cannot be constructed.
Logs
Extra Information
I have narrowed this issue down to some code in the _format_tool method in agent.py. https://github.com/acon96/home-llm/blob/474cb9a18346dcca6a4b9b8e97c8698a653c748d/custom_components/llama_conversation/agent.py#L503 defines the Minimal Function Style Format, which returns a string. However, the other formats below it, e.g. the Reduced JSON Tool Format (https://github.com/acon96/home-llm/blob/474cb9a18346dcca6a4b9b8e97c8698a653c748d/custom_components/llama_conversation/agent.py#L524) and the Full JSON Tool Format (https://github.com/acon96/home-llm/blob/474cb9a18346dcca6a4b9b8e97c8698a653c748d/custom_components/llama_conversation/agent.py#L537), return dictionaries built by dict comprehension. join() only accepts an iterable of strings, so it raises a TypeError when the items are dicts. So I suppose we need to either serialize those dicts into strings or handle them conditionally with something other than join().
I will try to code something and open a PR at some point soon. If anyone else has pointers or is working on a fix that would make mine redundant, please let me know :3
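One possible shape for such a fix (a sketch under my own assumptions, not the patch that eventually landed in develop): serialize dict-valued tool definitions with json.dumps() before joining, and pass string-valued ones through unchanged:

```python
import json

def tools_to_prompt(formatted_tools: list) -> str:
    # Hypothetical helper: ensure every entry is a string before joining,
    # since str.join() raises TypeError on non-string items such as dicts.
    parts = [
        tool if isinstance(tool, str) else json.dumps(tool)
        for tool in formatted_tools
    ]
    return ", ".join(parts)

print(tools_to_prompt(["turn_on(entity)", {"name": "turn_off", "args": ["entity"]}]))
```

This keeps the minimal string format untouched while making the JSON formats safe to join into a single prompt string.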