Closed squito1 closed 2 months ago

Probably the wrong place to post this, so please direct me elsewhere for future posts. Could this allow voice assistants to know which room you are communicating from? For example, instead of saying “xxxx, turn the kitchen light on”, you could just say “xxx, turn the light on”?
It certainly could, but that would depend entirely on how the voice agent can be configured. Bermuda gives you the area of a given person, and I'd think that's the sum total of what's required from Bermuda.
The voice agent would need to know who is talking, and then know which sensor will tell it where that person is. It could do that by looking at the right "Area" sensor, or by using the "Area" attribute on the Bermuda device_tracker entity, which it could find by checking which tracker(s) are attached to the given user.
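As a minimal sketch, here's what reading that attribute might look like in a Home Assistant template. The entity ID and the exact attribute name are assumptions for illustration; check what your own Bermuda install actually creates:

```jinja
{# Which area is this person's phone in right now? #}
{# 'device_tracker.alices_phone_bermuda' and the 'area' attribute name are #}
{# hypothetical - substitute your own Bermuda device_tracker entity. #}
{{ state_attr('device_tracker.alices_phone_bermuda', 'area') }}
```

A voice pipeline could then substitute that area into a service call instead of requiring a hard-coded room name in the spoken command.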
An "easier" way is probably for the voice assistant to have mics in each area, so that it would know which area the command was coming from - but that might not be suitable for all use-cases.
At any rate, it'd be something to address from the voice assistant end of things, as there isn't anything more Bermuda can do from its end, AFAIK.
The HA discussion board would probably be the best place to ask whether (and how) Voice Assist could be set up that way.
I have a prompt for Extended OpenAI Conversation where I pass in as much info as possible:
```
Name 1 Phone Area: {{ bermuda phone1 area state }}
Name 1 Last seen on camera: {{ doubleshot name1 state }}
Camera 1 Area: {{ bermuda camera1 area state }}
Name 2 Phone Area: {{ bermuda phone2 area state }}
```

etc.
works pretty well :)
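For anyone adapting this, here's a sketch of how those placeholders might look as actual Jinja2 in the prompt template. All entity IDs below are hypothetical; substitute whatever Bermuda and your camera integration actually create:

```jinja
{# Entity IDs are assumptions - replace with your own sensor/tracker names. #}
Name 1 Phone Area: {{ states('sensor.name1_phone_area') }}
Name 1 Last seen on camera: {{ states('sensor.doubleshot_name1') }}
Camera 1 Area: {{ states('sensor.bermuda_camera1_area') }}
Name 2 Phone Area: {{ states('sensor.name2_phone_area') }}
```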