jenrichmond / same_page

inter-rater reliability #2

Open · jenrichmond opened this issue 3 years ago

jenrichmond commented 3 years ago

Looking for a way of demonstrating (hopefully) high reliability across checks. We could also look at change over time, but ideally you want to show that reliability was high and constant across the coding period.

Maybe look into the kappa function from the psych package?
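
A minimal sketch of what that could look like, assuming the function meant is `psych::cohen.kappa()` (base R's `kappa()` is unrelated; it computes matrix condition numbers). The ratings below are invented for illustration:

```r
library(psych)

# invented example: one row per coded item, one column per rater
ratings <- data.frame(
  rater1 = c(1, 0, 1, 1, 0, 1, 0, 0),
  rater2 = c(1, 0, 1, 0, 0, 1, 0, 1)
)

# cohen.kappa() reports unweighted and weighted kappa, each with a
# confidence interval
cohen.kappa(ratings)
```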

@ChristinaGitHub1

jenrichmond commented 3 years ago

Hi @ChristinaGitHub1, I have played with the package a bit and think you need your data in this format:

https://github.com/jenrichmond/christina_hons/blob/ee22699d123a8a3adc686586607fc989b319dbed/scripts/reliability/reliability%20notes.Rmd#L55

You want to filter out just the Check items and then run the scoring, so that for each rater, for each paper, you end up with a data score (and, separately, a materials score). If you give it the rating from each coder in a different column, it will give you a table of kappa for every rater against every other, plus an average (which is Light's kappa). There is a rough sketch of that workflow below.
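
This sketch assumes (hypothetically) a long-format coding sheet with columns `paper_id`, `rater`, `item_type`, and `score`; the real sheet's column names will differ, and the toy data are invented:

```r
library(dplyr)
library(tidyr)
library(psych)

# hypothetical stand-in for the coding sheet: long format, one row
# per rater per item
coding <- tibble::tribble(
  ~paper_id, ~rater,   ~item_type, ~score,
  "p1",      "coder1", "Check",    1,
  "p1",      "coder2", "Check",    1,
  "p1",      "coder3", "Check",    1,
  "p2",      "coder1", "Check",    0,
  "p2",      "coder2", "Check",    0,
  "p2",      "coder3", "Check",    1,
  "p3",      "coder1", "Check",    1,
  "p3",      "coder2", "Check",    1,
  "p3",      "coder3", "Check",    1,
  "p4",      "coder1", "Check",    0,
  "p4",      "coder2", "Check",    0,
  "p4",      "coder3", "Check",    0,
  "p5",      "coder1", "Check",    1,
  "p5",      "coder2", "Check",    0,
  "p5",      "coder3", "Check",    1
)

kappa_input <- coding %>%
  filter(item_type == "Check") %>%       # keep only the Check items
  pivot_wider(id_cols = paper_id,        # one row per paper
              names_from = rater,        # one column per coder
              values_from = score)

# with more than two rater columns, cohen.kappa() returns kappa for
# every pair of raters plus the average across pairs (Light's kappa);
# as.data.frame() avoids any tibble quirks inside psych
cohen.kappa(as.data.frame(select(kappa_input, -paper_id)))
```

To address the change-over-time question from the first comment, you could also add a coding-period column before the pivot and run the same call within each period.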