Open · sofer opened this issue 7 years ago
@sofer the algorithm is behaving as expected in this case. This is the result of the analysis on that picture:
If you look at the `scores` value, that's the result of the analysis. Every picture is rated for all the emotions the algorithm can recognise, and then the highest value is selected as the "correct" answer. It's not uncommon for the algorithm to detect many emotions with a percentage below 50%. Do you have any suggestions for how to deal with that?
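To make the current behaviour concrete, here is a small sketch of how the top emotion gets picked from a `scores` object. The emotion names follow the Emotion API's categories, but the numeric values below are made up for illustration, not taken from the picture in question:

```python
# Hypothetical scores for one picture (values invented for illustration).
scores = {
    "anger": 0.02,
    "contempt": 0.05,
    "disgust": 0.03,
    "fear": 0.21,
    "happiness": 0.24,
    "neutral": 0.37,
    "sadness": 0.03,
    "surprise": 0.05,
}

# The game currently treats the highest-scoring emotion as the "correct" answer,
# even when that score is well below 50%.
top_emotion, top_score = max(scores.items(), key=lambda kv: kv[1])
print(top_emotion, top_score)  # here: neutral, 0.37
```

As the example shows, the "correct" answer can win with only 37% confidence while several other emotions sit close behind it.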
@daymos That makes sense. One solution might be not to use photos below a certain threshold of certainty, say 60% or 70%, or at least 50%. I love how this one has no fewer than 4 emotions with a score above 20%. That must be quite unusual.
That picture is a toughie! @SimonJStewart sourced the pics for us; I think it wasn't easy to get high confidence from the algorithm in the end. I'm not sure how frequent pictures like this are, though. At some point we discussed using less clear-cut pictures like this one to make the game harder at higher levels. We haven't done anything yet in terms of creating logic to set the difficulty, but it would be a cool feature, I think. If this is an issue, I can remove the pictures that are so mixed, although we don't have that many pictures in general.
@SimonJStewart I think it's your call.
I think we should leave ambiguous results in - we are working to the principle that there is no right/wrong answer, just divergence from the mean.
It does look odd on the screen, though. I'm wondering if this could be fixed with a different UI:
Rather than display "Emotion API was x% certain it was [top emotion]", could we show something like:

:) --- 20%
:o --- 3%
:| --- 37%
:| --- 0%
:( --- 3%
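The mock-up above could be produced directly from the `scores` object. A minimal sketch, assuming a hypothetical emoji mapping (the actual emoji-to-emotion pairing would be a design decision):

```python
# Assumed mapping from Emotion API categories to the emoji in the mock-up.
EMOJI = {
    "happiness": ":)",
    "surprise": ":o",
    "neutral": ":|",
    "sadness": ":(",
}

def render_scores(scores):
    """Format every emotion as '<emoji> --- <percent>%', highest score first,
    instead of showing only the single top emotion."""
    lines = []
    for emotion, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        emoji = EMOJI.get(emotion, emotion)
        lines.append(f"{emoji} --- {round(score * 100)}%")
    return "\n".join(lines)

print(render_scores({"happiness": 0.20, "neutral": 0.37, "sadness": 0.03}))
```

Showing the full distribution would also make it obvious to players when a picture was ambiguous, which fits the "divergence from the mean" principle above.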
So, if I thought the face was neutral and 37% of people thought it was sad, then aren't I in agreement with the majority of people? Something seems to have gone wrong with your algorithm in this case.