uchicago-computation-workshop / abdullah_almaatouq

Repository for Abdullah Almaatouq's presentation at the CSS Workshop (4/11/2019)

Judging your peers in noisy situations #7


AlexanderTyan commented 5 years ago

Thank you for your study.

One real-life difficulty with judging which of our peers are actually competent is that the signal about people's competence may be noisy. For example, job interviews may not predict how well a person will actually do the job, test results may measure test-taking ability rather than the competence the test is meant to capture, and there is often an incentive to appear more competent than one is for promotion and other purposes. In your study, by contrast, the signal about others' competence was quite clean, one could even say idealized.

To what extent would noisy or even distorted signals about peers' competence influence the performance of the non-solo groups in your study? If the signal is very noisy, would we expect the performance differences between the solo, static, and dynamic conditions to shrink, for instance? Would introducing "noise" be a good extension to your MTurk experiment? What would it mean for real-world applications of your findings? Should we be trying to eliminate noise in team environments so that peer selection is more effective?
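
To make the intuition behind the noise question concrete, here is a minimal sketch (my own illustration, not anything from the paper): true competence is drawn from a standard normal, the observed signal adds Gaussian noise with standard deviation `noise_sd`, and the group picks the peer with the highest observed signal. As the noise grows, the expected competence of the selected peer should fall toward the population average, which is one mechanism by which the solo/static/dynamic gaps could shrink.

```python
# Illustrative simulation (hypothetical, not from the study): how noise in a
# competence signal degrades peer selection.
import random
import statistics

def selected_competence(n_peers=10, noise_sd=0.0, n_trials=5000, seed=0):
    """Mean true competence of the peer chosen by highest *observed* signal."""
    rng = random.Random(seed)
    picks = []
    for _ in range(n_trials):
        # True competence of each available peer.
        competence = [rng.gauss(0, 1) for _ in range(n_peers)]
        # Observed signal = competence + measurement noise.
        signal = [c + rng.gauss(0, noise_sd) for c in competence]
        # Select the peer who *appears* most competent.
        best = max(range(n_peers), key=lambda i: signal[i])
        picks.append(competence[best])
    return statistics.mean(picks)

for sd in (0.0, 0.5, 1.0, 2.0):
    print(f"noise_sd={sd}: mean competence of selected peer = "
          f"{selected_competence(noise_sd=sd):.2f}")
```

Running this, the mean competence of the selected peer drops steadily as `noise_sd` increases, consistent with the worry that noisy signals would blunt whatever advantage peer selection gives the non-solo conditions.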