nnickels / MACS30200proj

Repository for Perspectives of Computational Research

Proposal feedback #1

Open bensoltoff opened 6 years ago

nnickels commented 6 years ago

@bensoltoff Professor Soltoff: Can I get your opinion on an idea? Since even before my proposal presentation, I've been thinking a lot about how to handle the biases introduced when students take surveys at different times of day in my digital survey project. I have an alternative idea I may want to pursue that still builds on responses to the Trait Anxiety Inventory in the UChicago community.

I've always been very interested in how the UChicago community, which makes up my dissertation sample, differs from online digital survey samples, since some of our lab's projects have recruited exclusively from MTurk. My plan would be to take a sample of Trait Anxiety scores from the UChicago community (recruited via UChicago Marketplace, listservs, and UChicago Facebook pages) and compare it to a sample of Trait Anxiety scores collected via MTurk. I could compare the overall distributions of scores between participants, and I could also include the post-survey questions about time of day, setting, and mood in the comparison. This would answer the following RQs:

1) How do trait anxiety scores from a sample drawn from the UChicago community differ from those of a sample drawn from an MTurk population?
2) Does digital survey behavior differ between these two samples in terms of where, when, and in what mood these digital surveys are taken?

I feel like a between-subjects comparison design gives me sounder results: whatever biases may determine when participants complete a digital survey, I'd be focusing on the comparison between samples, not on the construct validity of the measure itself. Does that make sense? Since my lab recruits from both "populations", this question is still crucial to my lab's work, and quite frankly, to digital survey research in general. Let me know what you think ... Thank you!
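For concreteness, here is a minimal sketch of the RQ 1 comparison. The file names and the `trait_anxiety` column are hypothetical placeholders; any well-formed score data would work the same way:

```python
# Minimal sketch of the between-samples comparison (RQ 1).
# File names and the column label are hypothetical placeholders.
import pandas as pd
from scipy import stats

uchicago = pd.read_csv("uchicago_scores.csv")  # hypothetical file
mturk = pd.read_csv("mturk_scores.csv")        # hypothetical file

# Trait Anxiety Inventory scores for each sample
u_scores = uchicago["trait_anxiety"].dropna()
m_scores = mturk["trait_anxiety"].dropna()

# Welch's t-test: do the sample means differ?
t_stat, t_p = stats.ttest_ind(u_scores, m_scores, equal_var=False)

# Two-sample Kolmogorov-Smirnov test: do the full distributions differ?
ks_stat, ks_p = stats.ks_2samp(u_scores, m_scores)

print(f"Welch t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"KS test:      D = {ks_stat:.2f}, p = {ks_p:.3f}")
```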

bensoltoff commented 6 years ago

I think that would allow you to answer part 1; I'm not so sure about part 2. Recruiting strategies differ greatly between those two populations. On MTurk, I would bet you'd get a lot of participants as soon as the HIT is made available, since it's simply a larger overall pool of workers to draw from. So when you make the HIT available would, I think, have a greater influence over when workers complete the survey than it would for the UChicago population. And if time of day influences an individual's mood, the mood measure would also be potentially biased by when the HIT opens.

bensoltoff commented 6 years ago

But now that I think of it, MTurk is a global population to draw from. The local time of day when a HIT opens in the US is different from, say, India. You might time the HIT's start to maximize the variation in when individuals first have access to it.
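As a quick illustration of that local-time spread (the launch time and the zones are chosen arbitrarily for the example):

```python
# Quick illustration: one UTC launch time maps to very different
# local times across MTurk worker countries (zones chosen arbitrarily).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

launch_utc = datetime(2018, 5, 1, 15, 0, tzinfo=timezone.utc)  # 15:00 UTC

for zone in ["America/Chicago", "Asia/Kolkata", "Australia/Sydney"]:
    local = launch_utc.astimezone(ZoneInfo(zone))
    print(f"{zone:>18}: {local:%H:%M} local")
```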

bensoltoff commented 6 years ago

Or run multiple HITs that are identical except for the local time at which each opens. Then geo-restrict each HIT so it is only available in certain countries, and start them at staggered times.
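A rough sketch of that staggered, geo-restricted setup using boto3's MTurk client. The reward, durations, question XML file, and HIT text here are all placeholder assumptions; only the Worker_Locale qualification ID is MTurk's built-in value:

```python
# Rough sketch: identical HITs, geo-restricted by country, to be
# launched at staggered times. Reward, durations, question XML, and
# HIT text are placeholder assumptions.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# '00000000000000000071' is MTurk's built-in Worker_Locale qualification.
def locale_requirement(country_code):
    return {
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": country_code}],
    }

QUESTION_XML = open("survey_question.xml").read()  # hypothetical file

def launch_hit(country_code):
    # Actual launch scheduling (cron, task queue, etc.) would decide
    # when this gets called for each country.
    return mturk.create_hit(
        Title="Short survey on mood and anxiety",
        Description="Complete a brief validated survey.",
        Reward="0.50",
        MaxAssignments=100,
        LifetimeInSeconds=60 * 60 * 4,        # HIT stays open 4 hours
        AssignmentDurationInSeconds=60 * 20,  # 20 minutes to finish
        Question=QUESTION_XML,
        QualificationRequirements=[locale_requirement(country_code)],
    )
```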

bensoltoff commented 6 years ago

If you can find a way to get a wide range of potential times at which workers would first see and complete the HIT, I think this design is useful.