Set up the initial app with maps, Android video capture, and processing through a canned "dummy" model. The interface has 2 buttons that launch 2 different kinds of activities 👍
ClassifierActivity calls a model that performs identification only and displays the results
DetectActivity calls a model that performs identification + localization (detection/localization) and displays the results
NEEDS in future (this is just a starting point):
Interface to be reworked.
Location tracking (not just the initial location) to be added to the map display.
Addition of triggers/buttons for each of the classification modules (Mask, Fever, Crowd, SocDist).
Cloud Firestore capabilities for storage. Need to determine HOW this is triggered: we don't want to store continuous video sequence data (expensive). Does the user trigger storage, or do we sample based on user movement (when far enough away) or elapsed time?
Visualization update of the map with Firestore data (live data feed).
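For the location-tracking item above, the map will eventually need the user's whole track rather than just the initial fix. A minimal, Android-independent sketch of the bookkeeping side (the actual polyline drawing would use the Maps SDK) might look like this; the class and method names are illustrative assumptions, not existing project code:

```java
import java.util.ArrayList;
import java.util.List;

/** Accumulates GPS fixes so the map can draw the user's track, not just the start point.
 *  Hypothetical helper class; names are assumptions, not part of the current app. */
public class LocationTrack {
    public static class Fix {
        public final double lat, lng;
        public Fix(double lat, double lng) { this.lat = lat; this.lng = lng; }
    }

    private final List<Fix> fixes = new ArrayList<>();

    public void addFix(double lat, double lng) { fixes.add(new Fix(lat, lng)); }

    public List<Fix> getFixes() { return fixes; }

    /** Total path length in meters, summing haversine distance between successive fixes. */
    public double totalDistanceMeters() {
        double total = 0;
        for (int i = 1; i < fixes.size(); i++) {
            total += haversineMeters(fixes.get(i - 1), fixes.get(i));
        }
        return total;
    }

    static double haversineMeters(Fix a, Fix b) {
        double r = 6371000; // mean Earth radius in meters
        double dLat = Math.toRadians(b.lat - a.lat);
        double dLng = Math.toRadians(b.lng - a.lng);
        double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(a.lat)) * Math.cos(Math.toRadians(b.lat))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * r * Math.asin(Math.sqrt(h));
    }
}
```

On Android, the fixes themselves would come from periodic location updates (e.g. FusedLocationProviderClient), and the accumulated list would back the map polyline.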
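For the Firestore question above (user-triggered storage vs. sampling on movement or time), the movement/time option can be sketched as a small gate that approves a write only when the user has moved far enough from the last stored sample or enough time has elapsed. This is a sketch of one possible policy, not a decided design; the class name and thresholds are assumptions:

```java
/** Gate for Firestore writes: approve a sample only after the user has moved far
 *  enough or enough time has passed, instead of uploading continuous video data.
 *  Hypothetical helper; thresholds are illustrative, not project-chosen values. */
public class StorageSampler {
    private final double minMeters;
    private final long minMillis;
    private double lastLat, lastLng;
    private long lastTimeMillis;
    private boolean hasSample = false;

    public StorageSampler(double minMeters, long minMillis) {
        this.minMeters = minMeters;
        this.minMillis = minMillis;
    }

    /** Returns true when a new sample should be stored; that fix becomes the new baseline. */
    public boolean shouldStore(double lat, double lng, long timeMillis) {
        if (!hasSample
                || haversineMeters(lastLat, lastLng, lat, lng) >= minMeters
                || timeMillis - lastTimeMillis >= minMillis) {
            lastLat = lat; lastLng = lng; lastTimeMillis = timeMillis;
            hasSample = true;
            return true;
        }
        return false;
    }

    static double haversineMeters(double lat1, double lng1, double lat2, double lng2) {
        double r = 6371000; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * r * Math.asin(Math.sqrt(h));
    }
}
```

A user-triggered button could simply bypass this gate, so both trigger styles can coexist while keeping write volume (and cost) bounded.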