Issue #3 proposes binary action-level feedback: the user either shot a correct or a wrong mole. In this issue the aim is to take that feedback one step further by introducing levels of feedback fidelity.
The idea is to calculate a number of metrics when users shoot targets and use these metrics to augment the positive feedback (shooting a correct mole). For example:
- action duration (the faster the user's reaction, the better the feedback)
- speed (the higher the speed, the better the feedback)
- trajectory straightness (the closer to an idealized straight trajectory, the better the feedback)
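As a rough sketch of how these metrics could be computed, here is a minimal example that derives duration, average speed, and straightness from a recorded hand trajectory. The `(t, x, y, z)` sample format and the function name are assumptions for illustration, not the project's actual API:

```python
import math

def action_metrics(samples):
    """Compute example feedback metrics from a hand trajectory.

    `samples` is a list of (t, x, y, z) tuples recorded from mole
    activation until the hit (hypothetical format).
    """
    duration = samples[-1][0] - samples[0][0]
    # Path length: sum of distances between consecutive samples.
    path = sum(math.dist(a[1:], b[1:]) for a, b in zip(samples, samples[1:]))
    speed = path / duration if duration > 0 else 0.0
    # Straightness: straight-line distance divided by actual path length;
    # 1.0 means a perfectly straight movement toward the mole.
    chord = math.dist(samples[0][1:], samples[-1][1:])
    straightness = chord / path if path > 0 else 1.0
    return {"duration": duration, "speed": speed, "straightness": straightness}
```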
The feedback fidelity can be implemented via thresholds (e.g. speed above 2 m/s = extra positive feedback) or as a direct mapping (higher speed = more particles). As a starting point, I would like us to focus on direct mappings (e.g. the higher-speed = more-particles example).
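The two mapping styles could look like this; all numeric values (base counts, cutoff, cap) are illustrative placeholders, not tuned values:

```python
def particles_threshold(speed, base=10, bonus=20, cutoff=2.0):
    """Threshold mapping: a fixed bonus of extra particles when
    speed exceeds the cutoff (values are illustrative)."""
    return base + (bonus if speed > cutoff else 0)

def particles_direct(speed, base=10, per_mps=15, cap=60):
    """Direct mapping: particle count grows linearly with speed,
    clamped so very fast movements can't flood the scene."""
    return min(cap, base + int(per_mps * speed))
```

The clamp in the direct mapping is one simple way to keep the feedback from becoming overwhelming while preserving the "more speed, more particles" incentive.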
There are several possible channels we can consider for delivering this feedback, including: 1) the visual feedback from the mole, 2) the audio of shooting a mole, 3) the visuals of the wall and/or the environment (background, floor). As a starting point, we will focus on the mole.
It's going to be important to think carefully about how we perform this mapping - we don't want to overwhelm the player. But in principle we can use it to incentivize and reward good behavior, and we may want to empirically confirm or compare this type of incentive against other feedback types (e.g. operation/task-based feedback).
@lucasmnt I hope this gave you some background. As a starting point, you can use the user's reaction time to determine the strength of a scaling effect on the mole (the lower the reaction time, the more pronounced the scaling effect). See animation reference below:
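One possible shape for that mapping: a linear interpolation from a maximum scale factor (very fast reaction) down to a minimum (slow reaction). The thresholds and scale values below are assumptions to be tuned, not specified anywhere in this issue:

```python
def hit_scale(reaction_time, t_fast=0.3, t_slow=1.0,
              max_scale=1.8, min_scale=1.1):
    """Map reaction time (seconds) to a pop-animation scale factor.

    Reactions at or below t_fast get max_scale, at or above t_slow
    get min_scale, with linear interpolation in between.
    All thresholds are illustrative placeholders.
    """
    if reaction_time <= t_fast:
        return max_scale
    if reaction_time >= t_slow:
        return min_scale
    t = (reaction_time - t_fast) / (t_slow - t_fast)
    return max_scale + t * (min_scale - max_scale)
```

Keeping `min_scale` above 1.0 means even slow correct hits still get some positive feedback, so the mapping rewards speed without punishing correctness.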
We can additionally add a brief highlight during the last 200-300 ms of the interaction, which could depend on the same metric.
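A sketch of how that highlight could be driven per frame; tying the peak opacity to reaction time and fading linearly over ~250 ms are both assumptions for illustration:

```python
def highlight_alpha(reaction_time, elapsed_ms, duration_ms=250):
    """Opacity of a brief end-of-interaction highlight.

    Peak brightness depends on reaction time (faster -> brighter),
    then fades linearly to zero over ~200-300 ms. The specific
    mapping and duration are illustrative assumptions.
    """
    peak = max(0.2, min(1.0, 1.0 - reaction_time))
    if elapsed_ms >= duration_ms:
        return 0.0
    return peak * (1.0 - elapsed_ms / duration_ms)
```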