SemanticPriming / SPAML

Semantic Priming Across Many Languages (PSA Proposal)
MIT License

Psychonomics 2020: Power simulations for linguistic norm data collection #1

Closed · doomlab closed this 3 years ago

doomlab commented 4 years ago

Type: Talk

Title: Power simulations for linguistic norm data collection

Session types: Language: semantics, Language: psycholinguistics

Authors:

Abstract (1250 characters max):

The current focus on replication and reproducibility has driven a need to adequately power studies through appropriate sample size planning. However, estimating power and sample size is usually driven by the choice of hypothesis test and research design. An incredible number of psycholinguistic database norms have been published in the last ten years, and the issues of power and sample size have largely been ignored, as these studies do not use hypothesis testing as a main objective. This presentation will discuss how to use accuracy in parameter estimation (AIPE) and qualitative coverage approaches to determine the appropriate number of participants for data collection in a study with no hypothesis test. Data from English feature production norms, the English Lexicon Project, and participant ratings (i.e., valence, concreteness, etc.) will be used to demonstrate how to estimate variable sample sizes by item for both qualitative (feature production norms) and quantitative (priming, lexical decision tasks, judgment tasks) type data.
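The AIPE logic described in the abstract can be sketched in code: instead of powering for a hypothesis test, increase the per-item sample size until the expected width of a confidence interval around the item estimate falls below a target. The simulation below is a minimal, hypothetical illustration (the population of ratings, the 1-7 scale, the 0.5-point target width, and all function names are assumptions, not taken from the actual talk or SPAML code).

```python
import random
import statistics

def min_n_for_ci_width(population, target_width=0.5, z=1.96,
                       start_n=10, max_n=500, step=10, reps=200, seed=1):
    """Illustrative AIPE-style search: return the smallest n (in steps of
    `step`) at which the average simulated 95% CI width for an item's mean
    rating falls at or below `target_width`. Returns None if not reached."""
    rng = random.Random(seed)
    for n in range(start_n, max_n + 1, step):
        widths = []
        for _ in range(reps):
            sample = rng.choices(population, k=n)       # resample n raters
            se = statistics.stdev(sample) / n ** 0.5    # standard error of the mean
            widths.append(2 * z * se)                   # full 95% CI width
        if statistics.mean(widths) <= target_width:
            return n
    return None

# Toy "population" of 1-7 valence-style ratings for one item (simulated,
# not real norm data): roughly normal, clipped to the scale endpoints.
gen = random.Random(42)
population = [min(7, max(1, round(gen.gauss(4, 1.5)))) for _ in range(10_000)]

n_needed = min_n_for_ci_width(population, target_width=0.5)
print(n_needed)
```

In a real application, the resampling population would be an existing norm set (e.g., the English Lexicon Project or published ratings), and the search would be run per item, which is what yields the "variable sample sizes by item" mentioned in the abstract.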

doomlab commented 4 years ago

Updated with KD and Nick as authors (Felix declined for this presentation).

JackEdTaylor commented 4 years ago

This sounds cool! I've never done this kind of power analysis so I have a few questions:

doomlab commented 4 years ago
MarMon83 commented 4 years ago

Hi Erin, thank you so much for putting this together! The abstract sounds really great! I was wondering whether AIPE could be extended to rating norms. For affective ratings (like valence, arousal, dominance), I'm not sure it would be possible, given the expected high variability in participants' judgments. But what do you think about lexical-semantic measures, like concreteness, mode of acquisition, imageability and so forth, which seem to be more "objective" measures (and so show less individual variability)?

doomlab commented 4 years ago

@MarMon83 - great question - we can definitely add it to the paper, and I'll add it to the presentation too. I'm unsure whether one of those is more difficult than the other - mainly, I think the confidence interval or SE would be larger for the more variable ones, but that's a good question we can test in the paper.
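The intuition in this reply can be made concrete with the standard normal-approximation relationship between rating variability and required sample size: for a fixed target CI width, needed n grows with the square of the rating SD. The snippet below is only an illustration of that scaling (the SD values and the mapping to specific measures are assumptions, not estimates from any norm set).

```python
def n_for_width(sd, width=0.5, z=1.96):
    """Approximate n giving a 95% CI of total width `width` for a mean,
    assuming a normal sampling distribution with known SD:
    width = 2 * z * sd / sqrt(n)  =>  n = (2 * z * sd / width) ** 2."""
    return ((2 * z * sd) / width) ** 2

# Hypothetical SDs: a less variable measure vs. a more variable one
print(round(n_for_width(1.0)))  # -> 61
print(round(n_for_width(2.0)))  # -> 246
```

Doubling the SD roughly quadruples the required n, which is why more "subjective" ratings such as valence would be expected to demand larger per-item samples than more "objective" ones such as concreteness.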