JSegrave-IBM opened this issue 4 years ago
Note: in order to work out the tech (algos / ML) behind the alerts, we're going to need to do some of this storyboarding.
e.g. (just one illustrative example) In the command center, an alert goes red for Alfonso. What happens next?
Labelling data is one of the big costs in machine learning, and the 'type' of explainability required determines how we do the labels (as well as how feedback is gathered at runtime). e.g. red / yellow / green is simple, but is it sufficiently explained to enable correct follow-up actions? When context/explanation is essential, we often choose to use machine learning to learn the explanations (like 'critical exposure to NO2 over 15 mins') rather than the decisions ('red: get the firefighter out'). We need to know this up-front, as labelling 100s/1000s of examples is expensive and usually not cost-effective to repeat / fix.
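To make the trade-off concrete, here's a minimal sketch (a hypothetical schema - the field names and thresholds are illustrative assumptions, not real Prometeo values) contrasting the two labelling approaches: labelling the decision directly vs. labelling the explanation and deriving the decision from a transparent rule:

```python
# Hypothetical sketch: two ways to label the same sensor window.
# All values/field names below are illustrative assumptions.

# Option A: label the decision directly. Cheap to render (red/yellow/green),
# but gives the command center no explanation to act on, and any change to
# the decision policy means re-labelling every example.
decision_label = "red"

# Option B: label the explanation. The model learns *why* (e.g. NO2 exposure),
# and the decision is derived by a transparent rule that can change without
# touching the labelled data.
explanation_label = {
    "gas": "NO2",
    "avg_ppm_15min": 5.2,      # 15-minute rolling average (illustrative number)
    "exposure": "critical",
}

def decide(explanation: dict) -> str:
    """Derive the red/yellow/green decision from a learned explanation."""
    if explanation["exposure"] == "critical":
        return "red"     # e.g. 'critical exposure to NO2 over 15 mins' -> get the firefighter out
    if explanation["exposure"] == "elevated":
        return "yellow"
    return "green"

assert decide(explanation_label) == "red"
```

The point of Option B is that if the follow-up policy changes (say, the 'red' threshold moves), only `decide()` changes - the expensive labelled examples stay valid.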
This should be updated for our October 1 MVP.
Articulate storyboards early on so that firefighters, designers, and software & hardware people can review them before development and identify conflicts, issues, and refinements (as well as build a shared sense of scope).
e.g. 'Firefighter with a watch and smartphone receives a "status red" alert' - the storyboards can state things like:
Likewise 'Command Center leader receives a "status red" alert for a Firefighter' - the storyboards can state things like:
Additional value from storyboards: