A human walks into a room and spots a couple of robots doing something. There is no way to interact with the robots directly, and getting too close might be dangerous.
Can the human use their mobile phone to identify the robots and their actions via AR?
Prerequisites
Team "Case 9 AR" is ready
The team must have gained sufficient mastery of the technology by completing Epic-1 and (possibly) Epic-2, so that we can build on the experience from the previous epics when tackling this one.
This level of readiness can always be discussed as we go, as it is a mix of technical experience, personal confidence and available time.
Team "Case 10 IoT" is ready
This epic is intended as a cooperation with Virtual summer internship 2020/Case 10 Ref platform IoT devices, where the Case 10 team will:
[ ] Configure IoT devices that will act as "robots"
[ ] Configure communication between the robots to simulate actions and/or interactions
[ ] Provide a REST API where one can get information about the robots and their actions or interactions
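The exact shape of that API is entirely up to the Case 10 team. As a hedged sketch, assuming a hypothetical `GET /robots` endpoint returning `id`, `name`, `action` and `position` per robot, the client side could start out like this:

```typescript
// Hypothetical response shape for a "GET /robots" endpoint -- the real field
// names are up to the Case 10 team and were never confirmed.
interface Robot {
  id: string;
  name: string;
  action: string; // e.g. "idle", "moving", "charging"
  position: { x: number; y: number; z: number };
}

// Parse and minimally validate a response body before trusting it in the AR scene.
function parseRobots(json: string): Robot[] {
  const data = JSON.parse(json);
  if (!Array.isArray(data)) {
    throw new Error("expected an array of robots");
  }
  return data.map((r) => {
    if (typeof r.id !== "string" || typeof r.action !== "string") {
      throw new Error("robot entry is missing id/action");
    }
    return r as Robot;
  });
}

// Example payload the IoT hub might return:
const sample =
  '[{"id":"r1","name":"Robot 1","action":"moving","position":{"x":1,"y":0,"z":2}}]';
console.log(parseRobots(sample)[0].action); // "moving"
```

Validating up front keeps bad hub data from silently producing empty or misplaced AR overlays.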
MVP
Use a mobile phone or laptop as device
Use a web app to drive AR experience
Identify real world robots in AR by querying IoT hub api
Provide extended information about robot and its action in AR
Document the pros and cons of this approach, as well as any other related experience made, as a simple markdown file in this repo
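The "identify" step above could start as a simple nearest-neighbour match between a point the user taps in the AR scene and the positions reported by the IoT hub. Everything below is a sketch under the big assumption that the two coordinate frames are already aligned:

```typescript
// A tapped/gazed point in the AR scene and the robots reported by the hub,
// in the same (assumed pre-aligned) coordinate frame. Aligning the phone's
// AR frame with the IoT hub's frame is the genuinely hard part and would
// need anchors or markers; this sketch skips it.
type Vec3 = { x: number; y: number; z: number };
interface TrackedRobot { id: string; action: string; position: Vec3 }

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Return the closest robot within maxDist metres of the tapped point, or
// null if nothing is close enough to count as "identified".
function identifyRobot(
  tap: Vec3,
  robots: TrackedRobot[],
  maxDist = 1.0
): TrackedRobot | null {
  let best: TrackedRobot | null = null;
  let bestDist = maxDist;
  for (const r of robots) {
    const d = distance(tap, r.position);
    if (d <= bestDist) {
      best = r;
      bestDist = d;
    }
  }
  return best;
}
```

The `maxDist` threshold is a guess; the pros/cons write-up is a good place to record what value actually works in the room.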
Stretch goals if MVP works
Can we enhance the scene with points of interest?
If the robot has a planned route, draw the route in AR. Or if the robot has an objective, illustrate the objective in AR.
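As a sketch of the route idea, assuming the API one day exposes a planned route as an ordered list of waypoints (no such field was ever specified):

```typescript
// Turn an ordered list of route waypoints into the line segments the AR
// layer would draw between consecutive points. Only the geometry is
// computed here; rendering it is left to e.g. three.js or A-Frame.
type Point = { x: number; y: number; z: number };

function routeSegments(waypoints: Point[]): [Point, Point][] {
  const segments: [Point, Point][] = [];
  for (let i = 0; i + 1 < waypoints.length; i++) {
    segments.push([waypoints[i], waypoints[i + 1]]);
  }
  return segments;
}
```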
Can we enhance the AR experience with audio?
(Note: the HTML5 audio API can be a real ¤#"!¤"% to work with at times, tears of bravery are ok)
Let each point of interest have its own sound snippet
Provide a simple interaction for how and when the sound should play
Example: Use gaze controls to "face-click" the POI
Audio snippet "KILLALLHUMANS!" should only be available as an easter egg.
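The "how and when" rule could be as simple as a per-POI cooldown, sketched here with an assumed gaze-click trigger and millisecond timestamps (the 3-second window is an arbitrary starting point):

```typescript
// Decide *when* a POI's sound should play: play on gaze-click, but
// rate-limit per POI so a jittery gaze cursor does not retrigger the
// snippet every frame.
class PoiAudioGate {
  private lastPlayed = new Map<string, number>();

  constructor(private cooldownMs: number = 3000) {}

  // now: a timestamp in milliseconds, e.g. performance.now()
  shouldPlay(poiId: string, now: number): boolean {
    const last = this.lastPlayed.get(poiId);
    if (last !== undefined && now - last < this.cooldownMs) {
      return false;
    }
    this.lastPlayed.set(poiId, now);
    return true;
  }
}
```

The actual playback call would still go through the HTML5 audio API, with all the quirks noted above.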
Can the user interact with the robots?
Example: the user can stop and start a specific robot. AR should clearly show that the robot has been stopped.
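A sketch of how the client could track that, with the command endpoint itself left hypothetical (e.g. something like `POST /robots/:id/stop`): the AR overlay should render from the state the hub *confirms*, not from the request the user just fired off.

```typescript
// Client-side state for the stop/start interaction. applyConfirmedState is
// called only once the hub acknowledges the command, so the "stopped"
// indicator in AR never runs ahead of reality.
interface RobotState { id: string; stopped: boolean }

function applyConfirmedState(
  robots: RobotState[],
  id: string,
  stopped: boolean
): RobotState[] {
  return robots.map((r) => (r.id === id ? { ...r, stopped } : r));
}
```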
Unfortunately team "Case 10" will not be ready in time for us to use their API, so we will move on to a new epic where we look at "multiplayer" scenarios.