+1, this would be a HUGE enabler, e.g. for makers who want to do their own home automation!
@mariobehling @Orbiter Can we enable something like viewing/changing the status of actively connected devices such as switches or a television directly from the Android app or any other SUSI frontend, provided the same account is logged in in both places, i.e. the Android app and SUSI Home running on hardware?
we can, but
The ability to do home automation comes mainly from the following changes:
@Orbiter I understand what you said, and my proposal keeps every point you mentioned in mind. I just wish to ask about the probable implementation of communication between SUSI at home and any other frontend, provided both are logged in to the same account. Can susi_server store key-value pairs for a person? That could be used to store the current state of devices.
As for changing the Android client, the only part required on that side, if needed, is showing switches, checkboxes, etc. for home devices, like Google Home does. Text-based implementations won't look that good. That's just a minor change.
For each account we can store accounting information ("what has the user done") and AI log information ("what does the AI know about the user"). The AI information can actually be set using a conversation: "set a to frankfurt", "get a". That sets an attribute named "a" and reads back its content.
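A minimal sketch of how a client could exercise this from outside the server, assuming the public chat endpoint `http://api.susi.ai/susi/chat.json` with a `q` query parameter and an `access_token` from the login API (the token handling is an assumption, not confirmed here):

```python
# Hypothetical sketch: set and read a per-user attribute through the SUSI chat API.
import requests

SUSI_HOST = "http://api.susi.ai"          # assumption: public susi_server instance
ACCESS_TOKEN = "<token-from-login>"       # placeholder, obtained via the login API

def chat(query):
    resp = requests.get(
        f"{SUSI_HOST}/susi/chat.json",
        params={"q": query, "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # The spoken answer is expected under answers[0].actions[0].expression
    return data["answers"][0]["actions"][0]["expression"]

print(chat("set a to frankfurt"))  # stores attribute "a" for this account
print(chat("get a"))               # should read back "frankfurt"
```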
Another question is whether we want susi_server to run locally on the device. That would make it possible to avoid needing accounts at all. It must be discussed.
@Orbiter That would be fine. We can define the states of a user's devices by adding a skill like "Turn the television in the living room on", which could store data in a JSON format like
{ "uid": "<user_id>", "location": "living room", "status": "on" }
This would also enable the same user to fetch results on a different device, say an Android phone, to view the status of his television. It could be asked in the form of a query like "Did I turn off my television in the living room?", which would let susi_server process the information and return the status as "No, it is currently on" or "Yes, it is off".
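A rough sketch of this state model (the names below, e.g. `set_state`, are hypothetical; in practice the JSON would live in susi_server's per-account storage rather than in memory):

```python
# Hypothetical in-memory model of the proposed per-user device state.
device_state = {}  # (uid, device, location) -> "on" / "off"

def set_state(uid, device, location, status):
    device_state[(uid, device, location)] = status

def answer_off_query(uid, device, location):
    """Answer a query like 'Did I turn off my television in the living room?'"""
    status = device_state.get((uid, device, location))
    if status is None:
        return "I don't know about that device yet."
    return "Yes, it is off." if status == "off" else "No, it is currently on."

# SUSI on the hardware records the action ...
set_state("<user_id>", "television", "living room", "on")
# ... and the Android client, logged into the same account, asks about it later.
print(answer_off_query("<user_id>", "television", "living room"))
```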
If a user says "I wish to turn the television off", it should also trigger a hook/callback to the hardware device, enabling us to change the status. But as far as I know, that can't be done for now, i.e. the capability is not there.
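One way the hardware side could expose such a callback, sketched with Python's standard library only (the port, payload shape, and the actual switching are assumptions; on a Raspberry Pi the switching would go through something like RPi.GPIO or a home-automation bridge):

```python
# Minimal sketch of a callback endpoint on the hardware device.
# A server-side hook could POST {"device": "television", "status": "off"} here.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def switch_device(device, status):
    # Placeholder: on real hardware this would drive a relay or GPIO pin.
    print(f"Switching {device} {status}")

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        switch_device(payload.get("device", "unknown"), payload.get("status", "off"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CallbackHandler).serve_forever()
```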
As far as running susi_server on the same device is concerned, this seems to be a good option. Mycroft (https://mycroft.ai) is already doing so; the only limitation is that it prevents cross-device operation, so if that is not required, it is the best option. One more point to consider is that the Raspberry Pi and similar devices suffer from limited processing power, which can prove to be a bottleneck.
I wrote my GSoC proposal using the standalone susi_server approach, with the server running on another device on either the intranet or the internet, and communication between susi_server and SUSI on the hardware happening via API calls. That is what I propose should be done in the initial stages. It can be expanded to running directly on the Raspberry Pi at later stages.
I am sharing that with you; please suggest probable ideas and changes. :smile:
Something like Jasper could also help complete the stack. Jasper is an open source platform for developing always-on, voice-controlled applications: https://jasperproject.github.io/
@mariobehling @Orbiter I ran PocketSphinx on a Raspberry Pi for offline detection and found that it is only good enough for detecting a small set of keywords or phrases. For full speech recognition it gives very inaccurate results on the Raspberry Pi. Thus, I am changing my approach to the following, if you both agree:
Does that seem fine? Please share your views and guidance.
Is this issue still open? Have there been any advances on it?
@Donyme you can check out https://github.com/fossasia/susi_linux
I found this very interesting.
I went through susi_linux and found the Roadmap; it says "Offline Voice Detection (if possible with satisfactory results)".
I can do that pretty easily. Can I? Or should I leave it for the GSoC participants? @anshumanv @mariobehling
Regarding offline voice detection: since we won't be able to use the Google Speech API or the IBM Watson API, do we need to build the STT from scratch, or can we integrate an API? @mariobehling @anshumanv
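For reference, offline keyword spotting (as opposed to full STT) is the part PocketSphinx handles reasonably well; a minimal sketch, assuming the `pocketsphinx` Python package with its bundled default acoustic model and a working microphone (the `kws_threshold` value would need tuning on the Pi):

```python
# Sketch: offline hotword/keyphrase spotting with PocketSphinx.
from pocketsphinx import LiveSpeech

# Disable the full language model and search only for a single keyphrase;
# the threshold controls the false-positive/false-negative trade-off.
speech = LiveSpeech(lm=False, keyphrase='susi', kws_threshold=1e-20)

for phrase in speech:
    # Fires whenever the keyphrase is detected in the live audio stream.
    print("Hotword detected:", phrase.segments(detailed=True))
```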
Amazon Echo is a device that only offers a voice command interface. How can SUSI AI offer the same interface? What options are there? Amazon has demonstrated how developers can connect to Amazon Echo using a Raspi.
What are options to make SUSI AI run on hardware? What software framework is the most useful?
Please propose and implement a solution that allows hardware to connect with SUSI AI and offer similar services as an Open Source solution.
LINKS: