aravindsagar opened this issue 6 years ago
I've looked into two candidate APIs: the Microsoft Azure Emotion API and the Google Cloud Vision API.
For our needs the Emotion API is the better fit, though the Vision API would work as well. The problem with the Emotion API is that the image to be analyzed has to be reachable by the cloud service, i.e. uploaded somewhere first (Azure or another accessible host). That means every selfie the user takes would have to be uploaded before it can be analyzed, which is not what we want. Alternatively, selfies could be stored on the device and uploaded later once Wi-Fi is available, but that approach is questionable.
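For reference, a minimal sketch of what the URL-based Emotion API call might look like, going straight against the REST endpoint. The region in the endpoint, the subscription key, and the hosted image URL are all placeholders/assumptions here; the real values come from the Azure portal.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class EmotionApiSketch {
    // Assumed endpoint/key for illustration only; check the Cognitive Services docs.
    private static final String ENDPOINT =
            "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize";
    private static final String SUBSCRIPTION_KEY = "YOUR_EMOTION_API_KEY";

    public static String recognize(String hostedImageUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
        conn.setDoOutput(true);

        // The image must already be hosted somewhere the service can reach --
        // this is exactly the upload requirement discussed above.
        String body = "{\"url\": \"" + hostedImageUrl + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Response is a JSON array of detected faces with per-emotion scores.
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}
```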
The Vision API, on the other hand, accepts the image bytes directly in the request, so nothing has to be hosted beforehand; obviously, a larger image means more data to transfer and a longer upload time.
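A minimal sketch of that byte-content variant against the Vision API REST endpoint, using FACE_DETECTION (which reports joy/sorrow/anger/surprise likelihoods per face, the closest thing to emotion detection this API offers). The API key and image path are placeholders, and this goes straight against the REST endpoint rather than through a client library.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Scanner;

public class VisionApiSketch {
    // Placeholder key; the real one comes from the Google Cloud console.
    private static final String API_KEY = "YOUR_VISION_API_KEY";
    private static final String ENDPOINT =
            "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY;

    public static String detectFaces(String imagePath) throws Exception {
        // The image is sent inline as base64-encoded bytes --
        // a larger selfie means a larger request and a longer upload.
        byte[] imageBytes = Files.readAllBytes(Paths.get(imagePath));
        String base64Image = Base64.getEncoder().encodeToString(imageBytes);

        String body = "{\"requests\": [{"
                + "\"image\": {\"content\": \"" + base64Image + "\"},"
                + "\"features\": [{\"type\": \"FACE_DETECTION\", \"maxResults\": 1}]"
                + "}]}";

        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}
```

(On Android below API 26 you'd swap `java.util.Base64` for `android.util.Base64`, but the request shape stays the same.)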
I've pushed a branch with basic usage of the two APIs (the Gradle dependencies are kinda messy). I'll also look into whether there's a way to do this locally, but since emotion detection is not a light computation, it's unlikely there's an available library that can run it entirely on-device.
It would be nice if selfie capture could also be encapsulated in this.
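If we do fold selfie capture in, the simplest starting point is probably the stock camera intent; a minimal sketch, assuming an Android activity (the class name and request code are made up for illustration):

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;

public class SelfieCaptureActivity extends Activity {
    private static final int REQUEST_SELFIE = 1; // arbitrary request code

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Launch the default camera app; the user takes the selfie there.
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, REQUEST_SELFIE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SELFIE && resultCode == RESULT_OK) {
            // Thumbnail-sized bitmap returned by the camera app -- conveniently
            // small, which keeps the upload to whichever API we pick cheap.
            Bitmap selfie = (Bitmap) data.getExtras().get("data");
            // TODO: convert to JPEG bytes and hand off to the chosen API.
        }
    }
}
```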