Now that we've got basic ASR (#1) and we'll soon have a command understanding baseline (#9), we're going to want to evaluate these as a whole pipeline:
If we speak commands generated by the GPSR generator to the robot, how often does the pipeline produce the correct semantic parse for what was said?
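A minimal sketch of that end-to-end metric, assuming hypothetical `transcribe` and `parse` callables standing in for the ASR node and the command-understanding baseline (neither name comes from our codebase):

```python
def pipeline_accuracy(utterances, transcribe, parse, gold_parses):
    """Fraction of spoken commands whose final semantic parse matches gold.

    utterances:  audio (or, for a dry run, text) of each spoken command
    transcribe:  ASR stage -> transcript string
    parse:       command-understanding stage -> semantic parse
    gold_parses: the parse each generated command should produce
    """
    correct = sum(
        parse(transcribe(u)) == gold
        for u, gold in zip(utterances, gold_parses)
    )
    return correct / len(utterances)
```

With identity stubs this just checks the bookkeeping; the real run would plug in the ASR node and parser.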
This will lead in a few directions, but the first we should explore is basic tuning of the microphone. We've forked a nice driver package that exposes the hardware's parameters; we should integrate its tuning interface with the ASR node.