We desperately need to increase Project Sidewalk's visibility/exposure and get better traction. LabInTheWild and Zooniverse are two academic sites that have done extremely well in crowdsourcing studies. We should study them and/or collaborate with them. :)
Along these lines, I have a very old TODO on my list that says "Explore how/if we can list Project Sidewalk on Professor Katharina Reinecke's LabInTheWild page."
I spoke with Katharina directly about this during my UW visit in April. She was supportive; however, LabInTheWild's key overarching philosophy is that all online experiments should provide meaningful feedback to participants (see: http://www.labinthewild.org/researchers.php). That is, we would have to improve the feedback we give users (which we need to do anyway), for example, telling them how well they are doing on labeling tasks and, perhaps, comparing their performance to others.
To do this, we would need to seed some "golden insertion" routes where we know the ground truth and can evaluate labeling performance in real time. Or brainstorm some other ways to give interesting, relevant feedback to users. (Honestly, our site feels like a better match for Zooniverse than for LabInTheWild... because the latter typically hosts online experiments that evaluate the user, which we are not doing.)
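To make the golden-route idea a bit more concrete, here is a rough sketch of what the real-time scoring could look like: compare the user's labels against the route's known ground-truth labels (matching on label type and proximity), then turn the result into the kind of feedback LabInTheWild expects. Everything here (the types, function names, and the 10 m matching threshold) is hypothetical, not existing Project Sidewalk code.

```typescript
// Hypothetical sketch only; these types and functions do not exist in the
// Project Sidewalk codebase.

type LabelType = "CurbRamp" | "NoCurbRamp" | "Obstacle" | "SurfaceProblem";

interface Label {
  type: LabelType;
  lat: number;
  lng: number;
}

interface FeedbackSummary {
  found: number;     // ground-truth labels the user matched
  missed: number;    // ground-truth labels with no nearby user label
  extra: number;     // user labels that matched nothing (possible false positives)
  precision: number;
  recall: number;
}

// Approximate haversine distance in meters between two labels.
function distanceMeters(a: Label, b: Label): number {
  const R = 6371000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Greedy matching: a user label "hits" a ground-truth label if it has the same
// type and is within thresholdMeters. Each ground-truth label matches at most once.
function scoreGoldenRoute(
  userLabels: Label[],
  groundTruth: Label[],
  thresholdMeters = 10
): FeedbackSummary {
  const matchedTruth = new Set<number>();
  let extra = 0;

  for (const userLabel of userLabels) {
    const hitIndex = groundTruth.findIndex(
      (gt, i) =>
        !matchedTruth.has(i) &&
        gt.type === userLabel.type &&
        distanceMeters(gt, userLabel) <= thresholdMeters
    );
    if (hitIndex >= 0) matchedTruth.add(hitIndex);
    else extra++;
  }

  const found = matchedTruth.size;
  const missed = groundTruth.length - found;
  return {
    found,
    missed,
    extra,
    precision: userLabels.length ? found / userLabels.length : 1,
    recall: groundTruth.length ? found / groundTruth.length : 1,
  };
}

// Example feedback after the user finishes a golden route:
// const s = scoreGoldenRoute(userLabels, goldenRouteLabels);
// `You found ${s.found} of ${s.found + s.missed} known issues on this street!`
```

The point is just that once we have a handful of ground-truth routes, the feedback itself is cheap to compute client- or server-side at the end of a mission.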