Closed: alecive closed this pull request 9 years ago
Nice job @alecive! Could you also provide a typical XML application in this PR, containing the new required modules as well?
Yes. I was planning to do that when the pull request for the lookSkin module is ready. Shall I do it now or then?
The sooner the better :smirk:
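A typical yarpmanager application for this demo might look like the sketch below. The module list, node placement, and the iSpeak input port name are assumptions based on this thread (the only port confirmed here is /demoRedBall/speech:o); adjust them to the actual deployment.

```xml
<application>
    <name>DemoRedBall_Speech</name>

    <!-- The red-ball demo itself -->
    <module>
        <name>demoRedBall</name>
        <node>localhost</node>
    </module>

    <!-- TTS module consuming the predefined sentences -->
    <module>
        <name>iSpeak</name>
        <node>localhost</node>
    </module>

    <!-- New connection introduced by this PR
         (/iSpeak as destination is an assumption) -->
    <connection>
        <from>/demoRedBall/speech:o</from>
        <to>/iSpeak</to>
        <protocol>tcp</protocol>
    </connection>
</application>
```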
After a bit of testing, I think this pull request is ready to be merged. As soon as I get to the blackie (or the bluey, for that matter) I can run a more extensive set of tests, but tests on the reddie were successful.
We have successfully tested this branch together with @super-ste and @julijenv. Everything works flawlessly. The only problem is that the config.ini file includes the ini file with the sentences in this way:

[include speech "speech_English.ini"]

This works as long as the speech_English.ini file is in the same directory as the config.ini file. If, for example, config.ini is in .local/share/..., the resource finder will fail to find the speech_English.ini file, which might be located in the install folder. The module handles everything correctly, but I think this is a bug in the ResourceFinder class (for which I will open an issue).
This pull request does the following:
Speech integration
The module opens a port called /demoRedBall/speech:o that sends out a set of predefined sentences for an eventual TTS module to use during the demo. This feature works successfully with the iSpeak module. The set of predefined sentences to be spoken is defined via an .ini file. That is, the configuration files for this demo now contain the following line, which links them to a specific .ini file to be used at configuration time (one for each language the user would like to use):

[include speech "speech_English.ini"]

This file lists the whole predefined set of sentences to be spoken. The speech during the red ball demo is divided into three states:

[speech_reach] -> used when the robot "sees" the ball and is trying to reach it
[speech_grasp] -> performed after the grasping action (regardless of its success/failure)
[speech_idle] -> used after the red ball is pulled away from the robot's sight

For each of these states there is a group in the .ini file. Each line is a sentence that will be spoken by the robot according to its state. Add as many sentences (i.e. lines) as you'd like to these groups: they will be chosen randomly by the manager at runtime. The speech file should therefore look as follows:
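A minimal sketch of such a speech file, with placeholder sentences (these are illustrative, not the sentences actually shipped with the PR):

```ini
// One group per demo state; each line is a sentence
// picked at random by the manager at runtime.
// (The sentences below are placeholders.)

[speech_reach]
"I can see the ball!"
"Let me reach for it."

[speech_grasp]
"Got it!"
"Oh well, I will try again."

[speech_idle]
"Where did the ball go?"
"Show me the ball again, please."
```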
LookSkin module

Albeit not yet present in the repository, I would like to add this simple module to this repo in the future (or discuss using similar modules already available). I made it in a couple of hours during the Brussels mission, so I would not call it production-ready for now. It simply listens to the
/skinManager/skin_events:o
port and looks at the point that has been touched when a contact occurs. The behavior at the hands is currently left empty because I would like to add a hand-shaking behavior there instead. With this pull request, the demoRedBall module should be able to cope with the lookSkin module when available.

NOTE
This branch has to be tested on the robot before merging. Please do not merge it yet.