jekalmin / extended_openai_conversation

Home Assistant custom component of conversation agent. It uses OpenAI to control your devices.

Any way to get regular response as well as actions? #29

Closed Someguitarist closed 1 year ago

Someguitarist commented 1 year ago

First, thanks, this plug-in is working great. Two questions;

1) Is there any way to still get regular questions answered? If I ask it to turn off the lights it works, but if I ask something like 'Who is Mario?' the plug-in's AI just repeats back 'Who is Mario'. If I ask the same question through the regular API without the plugin, I get a detailed answer about Mario. It's not the end of the world, but one of the nice things about having AI at your fingertips is being able to ask anything!

2) When I ask it to do something it can do, like turn off the lights, it will turn off the lights but give a response like "To turn off the laundry room lights, you can use the `light.laundry_lights` entity and issue the 'off' command." even though it already turned off the lights. Is there any way to just have it say something like 'Thanks, turned the lights off'? Having the Assist pipeline read out the long response takes a while. Thanks!

jekalmin commented 1 year ago
  1. I just tried this with two models, and the results are as follows.

    (screenshots: responses from gpt-3.5-turbo-1106 and gpt-4-1106-preview, 2023-11-18)

    For example, in order for the gpt-3.5-turbo-1106 model to answer general questions, you can modify the prompt by adding a sentence like the one below:

    You are not only limited to answer about smart home, but also general knowledge.
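    As a sketch, the prompt template could end up looking something like this (the lines other than the added sentence are hypothetical placeholders for whatever base prompt you already use, not the component's actual default):

    ```text
    You are a smart home assistant for Home Assistant.
    Answer questions about the smart home using the provided entities.
    You are not only limited to answer about smart home, but also general knowledge.
    ```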

    If your model repeats what you asked, tell the model not to repeat it via the prompt. The default prompt was arrived at by iterating like this:

    1. change prompt
    2. ask question
    3. check behavior
    4. repeat steps 1-3

    Although it is not ideal, it works in general. A better prompt should be contributed here, since the quality depends not only on the model but also on the prompt.

  2. This can probably also be fixed by changing the prompt. You can try removing the last two sentences of the default prompt. These two sentences worked when using gpt-3.5-turbo before, but they seem less effective with recent models.

    I put the sentence "Do not execute service without user's confirmation" because I wanted the model to ask me again before taking action.

    I put the sentence "Do not restate or appreciate what user says, rather make a quick inquiry." for cases like the following:

    user: "I'm done using restroom"
    assistant: "Do you want to turn off the light of restroom?"
    user: "Yes"
    assistant: "Turned the light off"

    Tweak the prompt, and please share what works by contributing to the examples so everyone benefits.
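    For reference, the two sentences in question (quoted above) sit at the end of the prompt, so the tail being discussed looks like this; deleting these two lines is the change to try:

    ```text
    Do not execute service without user's confirmation.
    Do not restate or appreciate what user says, rather make a quick inquiry.
    ```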

Someguitarist commented 1 year ago

Hmmm, so I think my issue might be related to running out of memory, or something similar. I'm running LocalAI. If I remove the template prompts entirely I get a response. If I instead increase the context size to something like ~4-5k, it will actually answer the question once, but on asking the same question a second time it just repeats the question back, at least on a 1660 Ti.

You can close this issue, as I don't think it's related to your plug-in at all. I think there's a setting somewhere in LocalAI to increase either VRAM or RAM allocation. If I can get it working, I'll post back with what I've changed!
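For anyone hitting the same thing: LocalAI takes per-model settings from a YAML file next to the model. A sketch of raising the context window, assuming LocalAI's `context_size` model option (the file name and model file below are illustrative placeholders, not from this thread):

```yaml
# models/my-model.yaml -- illustrative, adjust names to your setup
name: gpt-3.5-turbo        # the model name the Home Assistant integration requests
context_size: 4096         # raise from the default if long prompts degrade responses
parameters:
  model: my-model.gguf     # placeholder for your local model file
```

Whether this resolves the repeat-the-question behavior on a 1660 Ti would still need testing, since VRAM limits may also be involved.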

jekalmin commented 1 year ago

Oh, I have not tested with LocalAI yet :( Hope to find a way to resolve it! Thanks :D