jekalmin / extended_openai_conversation

A Home Assistant custom conversation agent component. It uses OpenAI to control your devices.

Device already on, Device already off #112

Closed goofy1988 closed 8 months ago

goofy1988 commented 8 months ago

First of all, hello and thanks for your great integration.

Sadly, I have one issue I cannot solve. When I give the command to turn on a device, it turns on. Next I turn off the device, and it turns off.

But the next time I want to turn on the device, it says the device is already on. The same happens when turning it off.

I've already tried to force it to do it anyway in the prompt template. I even threatened punishment, hehe. Have you ever encountered the same behavior? [Screenshots: Screenshot_20240121-080956_Home Assistant, Screenshot_20240121-084342_Home Assistant]

momoz commented 8 months ago

I have seen similar behavior. I ask what lights are on, and sometimes it doesn't show all the lights and sometimes it does. I've also seen it report the wrong temperature; when I reply that it's not that temperature, it says, "oh my bad, it's actually...", and then gives the right temperature. I wonder if it's in the prompts?

jekalmin commented 8 months ago

Thanks for reporting an issue!

@goofy1988

Could you try using tools in 1.0.2-beta1 version?

By using tools, the messages sent to OpenAI become a bit more verbose, resulting in more tokens, but hopefully the service calls will be more accurate.

@momoz

I also experienced this issue, but I haven't found a way to improve this behavior. I tried tweaking "Top P" or "Temperature" in the options, but it didn't really help much.

goofy1988 commented 8 months ago

It seems to work better. I figured out that the issue now only occurs when I don't close the dialog and issue several commands to turn the same device on and off within the same Assist dialogue. When the dialogue is closed and a new request to turn it on or off is made, it works now. That way it doesn't bother me so much anymore. Can you explain what the difference is with the new "tools"? Can I also create my own tools for specific use cases, or should I still use functions for that? Thanks.

jekalmin commented 8 months ago

Can you explain what the difference is with the new "tools"?

Tools are supported by OpenAI; function calling is deprecated in favor of tools.
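
For reference, here is a rough sketch (not the integration's actual code) of how the request and response differ at the OpenAI API level, using the openai Python client. The execute_services spec is abbreviated, and the model name and API key handling are just placeholders:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Abbreviated spec for illustration; the real execute_services schema is larger.
spec = {
    "name": "execute_services",
    "description": "Execute Home Assistant services.",
    "parameters": {"type": "object", "properties": {"list": {"type": "array"}}},
}
messages = [{"role": "user", "content": "turn on livingroom and bedroom light"}]

# Deprecated function calling: at most one function_call per assistant message.
legacy = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=messages,
    functions=[spec],
)
print(legacy.choices[0].message.function_call)

# Tools: the assistant message may contain several tool_calls, each with an id
# and the full JSON arguments, which can be replayed into the message history
# (as in the Tools example below).
current = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=messages,
    tools=[{"type": "function", "function": spec}],
)
for call in current.choices[0].message.tool_calls or []:
    print(call.id, call.function.name, call.function.arguments)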

Example

[Screenshot 2024-01-27, 10:13 PM]

Message History

Functions

[
    {
        "role": "system",
        "content": "I want you to act as smart home manager of Home Assistant.\nI will provide ...",
    },
    {"role": "user", "content": "turn on livingroom and bedroom light"},
    {
        "role": "function",
        "name": "execute_services",
        "content": "[{'success': True}, {'success': True}]",
    },
    {
        "content": "Both living room and bedroom lights have been turned on.",
        "role": "assistant",
    }
]

Tools

[
    {
        "role": "system",
        "content": "I want you to act as smart home manager of Home Assistant.\nI will provide ...",
    },
    {"role": "user", "content": "turn on livingroom and bedroom light"},
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": "call_YImAIIgVyMCwiWw9fW7dC7fz",
                "function": {
                    "arguments": '{"list": [{"domain": "input_boolean", "service": "turn_on", "service_data": {"entity_id": "input_boolean.bedroom_light"}}]}',
                    "name": "execute_services",
                },
                "type": "function",
            },
            {
                "id": "call_wj9opVEXTiipYCXvKrtI1WgJ",
                "function": {
                    "arguments": '{"list": [{"domain": "input_boolean", "service": "turn_on", "service_data": {"entity_id": "input_boolean.livingroom_light"}}]}',
                    "name": "execute_services",
                },
                "type": "function",
            },
        ],
    },
    {
        "tool_call_id": "call_YImAIIgVyMCwiWw9fW7dC7fz",
        "role": "tool",
        "name": "execute_services",
        "content": "[{'success': True}]",
    },
    {
        "tool_call_id": "call_wj9opVEXTiipYCXvKrtI1WgJ",
        "role": "tool",
        "name": "execute_services",
        "content": "[{'success': True}]",
    },
    {
        "content": "Both the living room and bedroom lights have been turned on.",
        "role": "assistant",
    }
]

As you can see in the example, although the result is the same, the message history is different.

Using functions is more concise, but the function call arguments are not stored in the history, so the LLM doesn't know what the previous arguments were. If you say "turn back off", it sometimes turns off only one light instead of two, because the LLM has to guess from the message history which lights should be turned off.

Using tools is more verbose, but the function call arguments are stored in the history. Although it uses more tokens, it is clear what the LLM has done before because the function call information is kept in the message history. This way, the LLM should be better at looking up what it has already done.
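
As a rough sketch of that mechanism (not the integration's actual code; execute_services, run_service, and the light entity here are just stand-ins), this is roughly how a tools round gets written back into the history before the next user turn:

import json

# History as sent to OpenAI after the first user turn (system prompt omitted).
messages = [{"role": "user", "content": "turn on livingroom and bedroom light"}]

def replay_tool_round(messages, assistant_message, run_service):
    # Keep the assistant message including its tool_calls, so the exact
    # arguments (domains, services, entity_ids) stay visible on later turns.
    messages.append(assistant_message)
    for call in assistant_message["tool_calls"]:
        args = json.loads(call["function"]["arguments"])
        result = run_service(args)  # e.g. call the Home Assistant service here
        # One "tool" message per call, linked back via tool_call_id.
        messages.append({
            "tool_call_id": call["id"],
            "role": "tool",
            "name": call["function"]["name"],
            "content": json.dumps(result),
        })

assistant_message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {
            "name": "execute_services",
            "arguments": '{"list": [{"domain": "light", "service": "turn_on", "service_data": {"entity_id": "light.bedroom"}}]}',
        },
    }],
}
replay_tool_round(messages, assistant_message, lambda args: [{"success": True}])

# On the next turn ("turn it back off"), the model sees the tool_calls above
# and can read the exact entity_id instead of guessing from the conversation.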

Can I also create my own tools for specific use cases, or should I still use functions for that?

No, you should still use functions. Currently, function calling is the only supported tool type.

[Screenshot 2024-01-27, 10:35 PM]

goofy1988 commented 8 months ago

Ah, thank you very much for your detailed explanation. Now I understand.