RoboCupAtHome / RuleBook

Rulebook for RoboCup @Home 2024
https://robocupathome.github.io/RuleBook/

Stricter and clearer procedure for giving commands #186

Closed LoyVanBeek closed 7 years ago

LoyVanBeek commented 8 years ago

Understanding a command now consists of 2 parts: speech recognition and language understanding. If the former doesn't work, you never get the chance to do the latter, which often happens at RoboCup@Home due to the noisy venue. It can work, but it can also be quite frustrating for the team and the referee (speaking from experience here).

Splitting these points up would help a lot of teams. Also, in the intended application (care), some potential users of robots may have a speech impediment. Standard speech recognition might not work for those people, so an alternative could be useful.

  1. Spoken command by the referee: grants 100% of the points for getting the command.
  2. Spoken command by a team member: grants 90% of the points.
  3. A GUI via which the command can be entered, e.g. a touch screen on the robot. No typing allowed. Grants 50% of the points, since no speech recognition is involved.
  4. QR-code fallback: grants 25% of the points or less. This is a big hassle to create and unnatural to use. You must provide an alternative.

This will always be the procedure, no exceptions.
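The tiers above could be encoded as a simple lookup. This is only an illustrative sketch: the modality names and the `command_score` helper are hypothetical, and only the percentages come from the proposal.

```python
# Hypothetical sketch of the proposed fallback scoring.
# Modality names and this helper are illustrative, not from the rulebook;
# only the score fractions follow the tiers proposed above.

# Fraction of the command's points awarded per input modality.
SCORE_FACTOR = {
    "referee_speech": 1.00,  # spoken command by the referee
    "team_speech": 0.90,     # spoken command by a team member
    "gui": 0.50,             # touch-screen GUI on the robot, no typing
    "qr_code": 0.25,         # QR-code fallback (25% or less)
}

def command_score(base_points: float, modality: str) -> float:
    """Points awarded for getting the command via the given modality."""
    if modality not in SCORE_FACTOR:
        raise ValueError(f"unknown input modality: {modality}")
    return base_points * SCORE_FACTOR[modality]
```

For example, a command worth 100 points would yield 50 points when entered via the GUI.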

swayshuai commented 8 years ago

How can referees know that the robot understands the language? Can we ask the robot to describe the actions it plans, to show that it really understands the command?

For a GUI, it would be natural to use an app on an iPad to send the task.
There are many ways to do this.

LoyVanBeek commented 8 years ago

There are several steps to 'getting a command':

  1. Recognizing the speech as text (which can be proven by repeating it or printing it on a screen, as ToBi and Homer do)
  2. Understanding the text ('My goal is to bring a drink to Wei')
  3. Planning: 'To bring a drink, I need to find it. To find a drink, I need to visit all the locations it could be at', etc.
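The three steps above could be sketched as a pipeline. Everything here is a hypothetical placeholder (the function names, the stubbed return values), not a real @Home interface; it only shows where the stages hand off and where a fallback like a GUI or QR code could replace an earlier stage.

```python
# Illustrative sketch of the three stages: ASR -> understanding -> planning.
# All names and stubbed values are hypothetical placeholders.

def recognize_speech(audio: bytes) -> str:
    """Step 1: ASR - turn audio into text (stubbed here).
    A GUI or QR-code fallback would bypass this step entirely."""
    return "bring a drink to Wei"

def understand(text: str) -> dict:
    """Step 2: language understanding - extract a structured goal (stubbed)."""
    return {"action": "bring", "object": "drink", "person": "Wei"}

def plan(goal: dict) -> list:
    """Step 3: planning - expand the goal into executable steps (stubbed)."""
    return [
        f"find {goal['object']}",
        f"grasp {goal['object']}",
        f"locate {goal['person']}",
        f"hand over {goal['object']}",
    ]

goal = understand(recognize_speech(b""))
steps = plan(goal)
```

Splitting the score along these stage boundaries would mean a team failing only at step 1 could still earn the points for steps 2 and 3.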

The QR codes and typing are a way around step 1. A GUI with icons to indicate the command can also help with step 2, up to some level I think.

Splitting the score for getting a command according to these steps would be good.

balkce commented 8 years ago

Agree with Loy.

kyordhel commented 7 years ago

I don't think teams should score for performing ASR alone: what we want is robots solving the task, so if the robot can't understand what it has been told, it is irrelevant whether the command was recognized.

The exception is Person & Speech Recognition, but this has already been addressed in Pull Request #266. Since the Continue Rule already provides a workaround for the whole command input, and GUIs (now allowed) are by far friendlier than QR codes, we can consider this issue solved.