jazzmonger opened this issue 9 months ago
I want this same feature: forward all requests that don't match a sentence to GPT for generative responses.
For example:
Hey Hal, turn on the lights > matches an Assist sentence and turns on the lights.
Hey Hal, open the pod bay doors, please > forwarded to the GPT API > response: “I’m sorry, Dave, I’m afraid I can’t do that.”
> I want this same feature, to forward all requests that don't match a sentence to GPT for generative responses. [...]
Turns out this is called "pipeline chaining". The HA devs are looking into it, and it's a heavily requested feature that has all kinds of uses.
jeff
> I want this same feature, to forward all requests that don't match a sentence to GPT for generative responses. [...]
Actually, I'm already doing this: if no sentence matches, I forward it to the HOME_ASSISTANT_AGENT. It should be a small change to allow this to be altered in the config flow; if I have the time this weekend I will try to implement it.
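For anyone experimenting outside the integration, the same hand-off can be tried manually with the conversation.process service, which accepts an agent_id. A minimal sketch only: the agent_id below is a placeholder for whatever agents exist on your install, and response_variable needs a recent HA release with service-response support.

# Sketch only: hand a sentence to a specific conversation agent.
# Replace agent_id with one from your own install
# (Developer Tools > Services > conversation.process lists the options).
- service: conversation.process
  data:
    text: "Open the pod bay doors, please"
    agent_id: conversation.home_assistant   # placeholder; could point at a GPT agent instead
  response_variable: assist_reply           # the reply can be inspected in later steps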
> Actually, I'm already doing this: if no sentence matches, I forward it to the HOME_ASSISTANT_AGENT.
HOW??? I didn't know it was possible to detect "I don't understand that" from Assist... Oh wait, I see you're the author of this!
So you're saying you could also intercept a failure of the Assist pipeline somehow and then let us forward the same sentence on to, say, Alexa by calling a service?
For context, this is how I manually send sentences to Alexa for processing instead of speaking them:
- service: media_player.play_media
  target:
    entity_id: media_player.kitchen_original
  data:
    media_content_type: custom
    media_content_id: 'play ac dc on Apple Music'
So when Assist fails to understand anything, I'd want to pass it on to Alexa (or ChatGPT, or whatever):
- service: media_player.play_media
  target:
    entity_id: media_player.kitchen_original
  data:
    media_content_type: custom
    media_content_id: "{{ states('sensor.sentence') }}"  # sensor.sentence stands in for wherever the unmatched text is stored
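One rough way to wire that up today, sketched below and untested: run the text through the conversation.process service first, and only hand it to Alexa when Assist comes back with an error. The response_variable support, the response_type field, and the entity IDs are assumptions and placeholders, so check them against your own HA version.

# Sketch of a script that tries Assist first and falls back to Alexa.
# Assumes conversation.process returns a service response (recent HA versions);
# the exact response fields may differ, so verify in Developer Tools > Services.
script:
  ask_assist_with_alexa_fallback:
    fields:
      sentence:
        description: The raw text heard from the user
    sequence:
      - service: conversation.process
        data:
          text: "{{ sentence }}"
        response_variable: assist_reply
      - if:
          - condition: template
            value_template: "{{ assist_reply.response.response_type == 'error' }}"
        then:
          - service: media_player.play_media
            target:
              entity_id: media_player.kitchen_original   # Alexa device (placeholder)
            data:
              media_content_type: custom                 # alexa_media "text command"
              media_content_id: "{{ sentence }}"

Calling script.ask_assist_with_alexa_fallback with the heard text in sentence keeps the alexa_media "custom" trick above as the fallback path.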
> Actually, I'm already doing this: if no sentence matches, I forward it to the HOME_ASSISTANT_AGENT. [...]
This would be great. With your code I have connected Google Assistant, Google Gemini, OpenAI GPT, and even Alexa (send only) to a single assistant. I would love to be able to select one of those as the fallback instead of the HA agent with its relatively few defined intents. If I get it working in your code I'll submit a PR. Thanks for publishing this.
I gave up on the HA pipeline: too many problems and no way to recover from them. I'm now using Willow with a local inference server and an ESP BOX 3 with pipeline chaining, and it works really, really well.
Any commands not understood by HA are forwarded on to Alexa for processing.
Any way to intercept responses and trigger based on what they are?
I'm trying to send commands that are not understood by Assist to Alexa... Basically, I want a trigger in an automation anytime Assist spits out "Sorry, I couldn't understand that" so I can send the same sentence on to Alexa. The use cases are too numerous to list!
https://github.com/toverainc/willow/discussions/339
Not only would this be useful as I've outlined, I could also see taking another action based on whatever the response from Assist is.
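If the reply from conversation.process is captured in a response_variable (as in the fallback sketch further up), branching on what Assist actually said is just a template condition. A minimal sketch, assuming the spoken text lives under response.speech.plain.speech; the field names may differ between HA versions, so verify them with Developer Tools first.

# Branch on the spoken reply from Assist (field names are assumptions).
# assist_reply comes from a response_variable on conversation.process,
# and sentence is the original text that was processed.
- choose:
    - conditions:
        - condition: template
          value_template: >
            {{ 'understand' in (assist_reply.response.speech.plain.speech | default('') | lower) }}
      sequence:
        - service: media_player.play_media
          target:
            entity_id: media_player.kitchen_original   # placeholder Alexa target
          data:
            media_content_type: custom
            media_content_id: "{{ sentence }}"
  default:
    - service: notify.persistent_notification
      data:
        message: "Assist replied: {{ assist_reply.response.speech.plain.speech }}"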