Closed LoyVanBeek closed 8 years ago
To also engage the audience more, we could select a member of the audience to say the command. This would of course earn some bonus points. It would also be great PR for the league.
@LoyVanBeek's idea of an audience member giving commands has been considered in past years as "non-expert users", in pretty much the same line of thought (even to the point of testing natural interaction). However, if I remember correctly, the main issues were that:
1.- The audience member may end up being as fluent as a team member, providing unfair extra points.
2.- The audience selection process adds complexity to already complex logistics.
3.- The randomness may introduce non-uniformity into the grading of teams.
Having said all this, I do agree with the arguments about good PR (with good TC/team supervision) and the fact that the audience will be engaged by an otherwise "boring" test. I welcome this discussion.
As for the original comment, I thought about it, and my primary issue would be that every test would require ASR to be carried out. Considering past performances, that may end up hindering teams, to the point of not getting to the parts of the test we're actually interested in benchmarking. However, I would agree to this idea if we see good ASR performance from the vast majority of teams this year, such that ASR can be considered an essential functionality for next year, to the point of not even warranting its own evaluation.
I already discussed my idea with some teams and they really want to keep the first stage as benchmarking, to which I agree. We'll have to see the GPSR and speech recognition today. If it works well for the stage 2 teams, then we can do my idea in stage 2. Even letting an audience member do the continue rule with e.g. a QR code could make it a bit more engaging for the audience.
For the first stage, I would like to provide e.g. the navigation test waypoints on the fly (for flexibility), which could be done via QR codes or some other non-speech channel as well.
Why not have some sort of ROS-based communication via networking (we're already using it for smart home appliances) to present the command? Maybe from a tablet or handheld device that connects to the robot's ROS server.
My issue is with the QR code. It seems like a step back for communication.
I would say entering the command by typing in a terminal would be a step back. But if there's a nice GUI, we can show it on a screen for the audience to see. That should only be a backup, though.
Anyway, these are details. Do you all agree that constructing a challenge from GPSR commands is a good idea?
Then we can also measure how long a robot can work autonomously, without human commands or intervention. Different robots have different speeds, so we need to be clever about this.
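One way to be "clever" about speed differences, just as a back-of-the-envelope sketch: score completed sub-commands per unit time rather than raw runtime, and discount for each intervention. The 20% per-intervention penalty below is a made-up weight, not a proposed rule:

```python
def autonomy_score(runtime_s, interventions, tasks_completed):
    """Naive autonomy metric: completed sub-commands per second of runtime,
    discounted for every human intervention (continue rule, reset, ...).
    Higher is better; a slow robot that finishes many tasks untouched can
    still beat a fast robot that needed constant help.

    runtime_s       -- total seconds the robot was operating
    interventions   -- number of human interventions
    tasks_completed -- number of sub-commands finished
    """
    if tasks_completed == 0 or runtime_s <= 0:
        return 0.0
    penalty = 0.8 ** interventions  # assumed 20% discount per intervention
    return (tasks_completed / runtime_s) * penalty
```

Whether the discount should be multiplicative like this, or a flat deduction as in the current scoring, is exactly the kind of detail the TC would have to settle.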
For next year, I also want to standardize the continue rule. When we generate a command, we first read it out loud; if that doesn't work, I show the robot, on my laptop screen, the QR code that was also created.
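The standardized flow I have in mind could look like the sketch below. The `speak`, `show_qr`, and `robot_understood` callables are hypothetical stand-ins for TTS, displaying the pre-generated QR code, and the robot's acknowledgement, and the two-attempt limit is an assumption on my part:

```python
def run_continue_rule(command, robot_understood, speak, show_qr,
                      max_spoken_attempts=2):
    """Continue-rule flow sketch:
    1. read the generated command out loud, up to max_spoken_attempts times;
    2. if ASR still fails, fall back to showing the pre-generated QR code.
    Returns how the command was finally delivered: 'speech', 'qr', or 'failed'.
    """
    for _ in range(max_spoken_attempts):
        speak(command)
        if robot_understood():
            return "speech"
    show_qr(command)
    return "qr" if robot_understood() else "failed"
```

Having the QR fallback baked into a fixed procedure like this would at least make the continue rule uniform across referees.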
For stage 1, this should be simple, because GPSR is all stage-1 abilities integrated. Stage-2 challenges can be defined as a spoken command to the robot. For example, the restaurant challenge could be commanded to the robot as: