Thanks for the feedback! There is a new API, which is located on the voicekit branch of this repo, and is also included in the 2017-09-11 image. This software is compatible with the original Voice Kit hardware.
There's no process for migration between the two versions - you're welcome to stick with the old "action.py" approach, or to rewrite your commands based on the demos of the new API: https://github.com/google/aiyprojects-raspbian/tree/voicekit/src
The AIY Essentials Guide has not yet been published, so for documentation for the new software you should refer to https://aiyprojects.withgoogle.com/
Well, I've set up my AIY box with the new software and then turned to the documentation you point to above. In the old software, src/action.py contained half a dozen examples of clever stuff.
To be frank, the assistant_library_demo gives no clue about how to implement even a simple action, such as saying the IP address.
Is it possible to implement the sort of actions that src/action.py contained with this new approach, using the Google Assistant library?
@drigz Just to be sure: is this new API the direction chosen for the future of this project? If so, it would be best for me/us to switch to the voicekit branch instead of using master. I've just started this as a project for me and my son to explore the possibilities of AIY and controlling home devices.
@divx118 I would suggest that you hold off switching from master to voicekit. I currently use the master branch to control all sorts of devices and services using the Google Assistant (rather than the Cloud API), and it is not at all clear from the documentation whether the voicekit branch supports this. I am busy reverting to the master branch.
I've contacted raspberrypi.org about the AIY Essentials Guide not being available to provide guidance for this software (which is the only AIY image available). The demos do not provide the guidance that non-experts need. They have advised that the documentation will be available soon.
@sheridat The documentation here is enough for me to get started; for the rest I can dig through the source. No problem. I just wanted to be sure this will be the future direction of the project, and from what I have been reading the answer is yes. Thanks anyway for your suggestion, but I am going to switch to the voicekit branch.
@divx118 If you manage to get it to do anything based upon the google_assistant_demo.py example, please consider posting it to the raspberrypi.org AIY forum, which is located at https://www.raspberrypi.org/forums/viewforum.php?f=114
Edit - I did find this post, which seems encouraging: https://medium.com/@aallan/a-retro-rotary-phone-powered-by-aiy-projects-and-the-raspberry-pi-e516b3ff1528
@sheridat I'm with you. I was getting the hang of the previous API and had many customised actions. It was pretty straightforward for a novice like me to modify, but this new 'vastly improved' API is completely different and the samples are not intuitive at all. I hope the AIY essentials book is good, as for now, I can't do anything further with this project. Communication on the changes has also been disappointingly poor...
The new API makes use of Cloud Speech for actions. The free usage is limited to 60 minutes per month, beyond which it becomes chargeable; at least 60 minutes per day would have made more sense. https://aiyprojects.withgoogle.com/voice#makers-guide-3-1--change-to-the-cloud-speech-api I am still wondering who the target audience for the voicekit is, as 60 minutes per month is a joke.
@shivasiddharth You can use the gRPC demo to access the Google Assistant and create your own actions; you are not limited to Cloud Speech. If everything goes well, better documentation should be coming soon.
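For example, something along these lines works (a rough sketch based on my reading of assistant_grpc_demo.py in the voicekit branch - names like aiy.assistant.grpc.get_assistant() and assistant.recognize() are taken from that demo, so check the file in your checkout in case they differ):

    import subprocess

    import aiy.assistant.grpc
    import aiy.audio
    import aiy.voicehat

    def main():
        assistant = aiy.assistant.grpc.get_assistant()
        button = aiy.voicehat.get_button()
        with aiy.audio.get_recorder():
            while True:
                print('Press the button and speak')
                button.wait_for_press()
                text, audio = assistant.recognize()
                if text == 'power off':
                    # Handle this phrase locally instead of playing the Assistant's reply.
                    aiy.audio.say('Good bye!')
                    subprocess.call('sudo shutdown now', shell=True)
                elif audio is not None:
                    # No local command matched, so play the normal reply.
                    aiy.audio.play_audio(audio)

    if __name__ == '__main__':
        main()

The point is that you get the recognised text back before the reply is played, so you can branch on it however you like.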
@divx118 So what is the difference between the SDK, the old AIY software, and this new AIY software in terms of the APIs they use? It's getting really unclear what is making use of what. If someone could shed some light on the differences, that would be great.
Any ideas how to shut the unit down with the updated image? Raspberry power off no longer works.
Well there are a couple of ways
@shavanni I use my push-button shutdown service for turning off headless units: https://github.com/shivasiddharth/pi-shut
After a lot of trial and a bit of error, I've migrated most of my code over to the new structure.
While the new code structure does make it a lot easier to define commands, it risks the 'assistant' script growing into a monolithic beast.
My plan is to keep the different types of commands in separate 'modules' to be imported as needed (a rough sketch of what I mean is below). This meant there was a bit of duplication between commands that had similar features (such as pressing the GPIO button to stop the current action).
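Roughly what I have in mind (the module and function names here are placeholders for my own setup, not anything from the repo):

    # kodi_commands.py - one module per group of related commands
    import aiy.audio

    def pause_kodi():
        aiy.audio.say('Pausing Kodi')
        # ... call Kodi's JSON-RPC API here ...

    # main assistant script - just maps phrases to handlers
    import kodi_commands

    LOCAL_COMMANDS = {
        'pause the telly': kodi_commands.pause_kodi,
    }

    def handle_local_command(assistant, text):
        action = LOCAL_COMMANDS.get(text)
        if action:
            assistant.stop_conversation()
            action()
            return True
        return False

Each module only needs to know about its own hardware or service, and the main script stays small.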
The latest version of the code can be found here: https://cloud.pembo.it/index.php/s/JSsf5i4Mv7FWYcP
I enhanced the shutdown code to play a warning audio file and give the user a chance to press the GPIO button to cancel the shutdown. My Kodi-related commands are working fine. Other features, such as MPD control, are still a work in progress.
I also created a systemd service for the python script that runs my assistant.
@sheridat, it would be good to add an example with a response to the user such as "IP address" or "repeat after me". I'd happily accept a pull request for this.
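Something like this would probably do for the IP address case (just a sketch - say_ip and the hostname -I call are illustrative rather than part of the current demos):

    import subprocess

    import aiy.audio

    def say_ip():
        # 'hostname -I' prints the Pi's IP addresses separated by spaces.
        ip_address = subprocess.check_output(['hostname', '-I']).decode('utf-8').split()[0]
        aiy.audio.say('My IP address is ' + ip_address)

It would be wired up to a phrase in the same way as the power_off_pi() example in assistant_library_with_local_commands_demo.py.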
@divx118, yes, voicekit is the direction. I'll be changing the default branch and adding messages to the README to explain the difference between the branches.
@shivasiddharth, thanks for posting the shutdown service! The new and old projects both use the same APIs (Cloud Speech and Assistant). The change is that before, you configured this with the configuration file or on the command line, whereas now there are different examples for the different APIs. Before, we tried to abstract both behind the same interface, but this was confusing and led to issues like #79.
@drigz Would you be able to clarify whether the new "voicekit" branch (or any other developments with this project) will affect Google Assistant being free to use on the Pi? Seeing this section about billing in the AIY projects documentation makes me wonder how it applies to this project.
P.S. I think an example to respond to "IP address" would be awesome.
@t1m0thyj Regarding the billing aspect: the same situation existed with the "master" branch, in that using the Google Assistant didn't cost you anything, whereas if you wanted to use Cloud Speech you had to sign up for billing and hence pay once you went past your free 60 minutes per month. The old master branch documentation has the same info regarding billing.
What I am trying to say is that the status quo regarding billing has not changed with the introduction of the voicekit branch. You will see why some of us were very keen for info on using the Google Assistant with the voicekit branch.
@sheridat Thanks, that makes sense regarding the billing. I'm still somewhat confused about exactly when the Cloud Speech API is used. Is it used whenever you code a custom action for Google Assistant?
@t1m0thyj I've not used the cloudspeech facility so I can't really say what it does.
As far as the Google Assistant goes, you can code custom actions using it. As an example, take a look at assistant_library_with_local_commands_demo.py (if you haven't got that file in your src folder, you need to pull it from the repo - see how to do that at the bottom).
That file already has two custom actions added:
def power_off_pi():
    aiy.audio.say('Good bye!')
    subprocess.call('sudo shutdown now', shell=True)

def reboot_pi():
    aiy.audio.say('See you in a bit!')
    subprocess.call('sudo reboot', shell=True)
I've added some of my own. The three custom actions below are specific to me: the first one calls a service on a Pi on which I run Home Assistant - it asks that Pi for the temperature in my garden - and the second and third switch my television on and off using LIRC, which sends IR remote commands. (These need "import requests" and "import os" at the top of the file.)
def garden_temp():
    headers = {'Content-Type': 'application/json'}
    response = requests.get('http://192.168.0.16:8123/api/states/sensor.garden_temp',
                            headers=headers)
    data = response.json()
    print(data)
    temp = data['state']
    output = 'Garden temperature is ' + str(temp) + ' centigrade'
    aiy.audio.say(output)
def ir_tvon():
    cmd = "irsend send_once /home/pi/vieratv.conf KEY_POWER_ON"
    p = os.popen(cmd)

def ir_tvoff():
    cmd = "irsend send_once /home/pi/vieratv.conf KEY_POWER_OFF"
    p = os.popen(cmd)
As you can see, you can do just about anything!
To activate those commands you have to alter the program to call the above. The "if text == 'power off'" and "elif text == 'reboot'" branches call the custom actions already in the file; I've added triggers for my own custom actions.
Look at this fragment: the first two are the 'power off' and 'reboot' custom action triggers, and after them I have added my own voice triggers.
elif event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED and event.args:
    text = event.args['text']
    print('You said:', text)
    if text == 'power off':
        assistant.stop_conversation()
        power_off_pi()
    elif text == 'reboot':
        assistant.stop_conversation()
        reboot_pi()
    elif text == 'outside temperature':
        assistant.stop_conversation()
        garden_temp()
    elif text == 'TV on':
        assistant.stop_conversation()
        ir_tvon()
    elif text == 'TV off':
        assistant.stop_conversation()
        ir_tvoff()
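One thing to note: the triggers above only fire on an exact match. For something that takes arguments, such as "repeat after me ...", match on the start of the text instead. A sketch (repeat_after_me is my own name for it, not part of the demo), meant to be pasted into the same file:

    def repeat_after_me(text):
        # Strip the trigger phrase and say back whatever follows it.
        to_repeat = text.replace('repeat after me', '', 1).strip()
        aiy.audio.say(to_repeat)

    # ...and in the event handler, alongside the other branches:
    #   elif text.startswith('repeat after me'):
    #       assistant.stop_conversation()
    #       repeat_after_me(text)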
If you need to pull the new demo file do the following
@sheridat Thank you for your detailed response. I tried updating to the new voicekit branch, but reverted to the old master branch because the new one doesn't have the voice-recognizer system service AFAICT.
The fact that the aiy.cloudspeech module uses billing doesn't seem to be much of a limitation, since all the other aiy.audio modules are free to use.
@t1m0thyj
Have a look at the code I linked to above. The voicekit code appears to have consolidated the LED monitoring and other processes into a set of subprocesses of the main service. The single service is run by launching your custom Python script in the background. I have included a copy of my systemd settings file; you will need to change its contents to match your environment. Since I am not using the official Raspbian-based image, my files are not in the same location.
I can confirm that none of my code relies on the cloudspeech service.
Looks like this discussion has resolved the original questions. If people have further questions, they can be posted on the RPi forums or here as appropriate.
Hi, I was puzzled by PR137 (new version of shutdown), which kinda implied that you would know who Billy Rutledge was and perhaps the PR's author, and I was equally puzzled by the reference to some AIY Essentials book.
Well, today this month's issue of the raspberrypi.org fanzine "The MagPi" is out - you can download it for free (they are lovely people): https://www.raspberrypi.org/magpi-issues/MagPi62.pdf
Have a look at page 10 and you will note that Lucy's role is revealed, but a far more interesting snippet is:
The new kits are easier to assemble than the initial freebies, with "a vastly improved API that makes it a lot easier to program" according to our own Editor, Lucy Hattersley.
No wonder the code in PR 137 looks strange.
So please advise: I've probably added 500 lines to my action.py this week, and I am concerned that I should hold off before adding another 500.