Closed: ben-domingue closed this issue 2 weeks ago
library(readxl)
y <- read_excel("QPart1Data_objects.xlsx")
x <- y[, 19:190]
## First row holds the real column names
colnames(x) <- x[1, ]
x <- x[-1, ]
## Drop free-text and data-quality columns
x <- x[, -grep("Text", colnames(x))]
x <- x[, -grep("quality", colnames(x))]
## Each questionnaire version spans 8 columns (7 items + an id column)
n <- ncol(x) / 8
L <- list()
for (i in 1:n) L[[i]] <- x[, ((i - 1) * 8 + 1):(i * 8)]
df <- data.frame()
for (i in 1:n) {
  l <- L[[i]][, -8]   # item responses for this version
  id <- L[[i]][1, 8]  # image id for this version
  m <- list()
  for (j in 1:ncol(l)) {  # use j here; reusing i clobbered the outer loop index
    m[[j]] <- data.frame(id = id, item = colnames(l)[j], resp = l[, j])  # was x[, j]: responses must come from this version's block
    colnames(m[[j]]) <- c("id", "item", "resp")
  }
  d <- do.call("rbind", m)
  df <- rbind(df, d)
}
save(df, file = "Part1.Rdata")
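As a sanity check, the same wide-to-long reshape can be exercised on a toy version block (column names and values here are made up, not from the actual spreadsheet):

```r
## Toy block: 3 respondents x (2 items + an image-id column),
## mimicking one element of L in the script above.
block <- data.frame(weird = c(3, 5, 1),
                    complicated = c(2, 6, 4),
                    image_id = rep("img01", 3))

l  <- block[, -3]   # item columns
id <- block[1, 3]   # the image id for this block
m  <- list()
for (j in 1:ncol(l)) {
  m[[j]] <- data.frame(id = id, item = colnames(l)[j], resp = l[, j])
}
long <- do.call("rbind", m)

nrow(long)  # 6 rows: 3 respondents x 2 items
```

Each respondent contributes one row per item, which is the long format the script saves to Part1.Rdata.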
You wrote: "NOTE: in this study, the images are the 'id', the items are the questions below, and the humans are raters (who hopefully have IDs)."
Is this how you expected it to be? (I am not sure what we mean by rater and rater id.)
looks close. is there information available about the rater? like, do they have an id for who makes those judgements?
one question here: why are there 8 files?
So, there were eight versions of the questionnaire. Approximately 50 respondents per version.
as discussed, just jam these together :)
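Assuming each version's long block ends up with the same three columns, "jamming together" is just a row-bind; a minimal sketch with made-up toy blocks:

```r
## Two toy per-version long blocks with matching columns (made-up data)
v1 <- data.frame(id = "img01", item = "weird", resp = c(3, 5))
v2 <- data.frame(id = "img02", item = "weird", resp = c(2, 6))

versions <- list(v1, v2)
df <- do.call("rbind", versions)   # stack all versions
## Optionally keep track of which version each row came from
df$version <- rep(seq_along(versions), times = sapply(versions, nrow))

nrow(df)  # 4
```

With ~50 respondents per version and eight versions, the combined frame is simply all versions stacked, one row per (respondent, item).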
Here are the results as a CSV: Object.stimulus.set.csv
note for ben: see email to authors, "Question about 'ObjAct stimulus-set'"
based on additional analysis of that data, I don't think we can get it into the proper format (at least not without a relatively large amount of back-and-forth with the authors)
https://link.springer.com/article/10.3758/s13428-021-01540-6#additional-information
The experiment was preregistered on OSF, including the methods, sample size, and analysis plans. The preregistration can be found at: https://osf.io/htnqd/.
NOTE: in this study, the images are the 'id', the items are the questions below, and the humans are raters (who hopefully have IDs).
the items: The questions were as follows:

- (a) “How weird is the image?”
- (b) “How likely is it to see such a scene in the real world?”
- (c) “How visually complicated is the image?”
- (d) “How hard is it to identify the object?” (here, the word “object” was replaced with the name of the specific object that appeared in the presented scene; for example, “How hard is it to identify the bottle?”)
- (e) “How disturbing is the image?”
- (f) “How much do you like the image?”
- (g) “How arousing do you find the image?”
- (h) “Which object is MOST likely to appear in the scene?”

Questions (a)–(g) were answered on a seven-point Likert scale from 1 (not at all) to 7 (very much). Question (h) was a multiple-choice question, with the following options: the congruent object (e.g., bottle); the incongruent object (e.g., flashlight); another object that is semantically related to the congruent object (e.g., glass); another object that is semantically related to the incongruent object (e.g., candle); and “other” (with an option to fill in the answer).
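For downstream work it may help to carry a small key mapping item letters (a)–(h) to short labels and response scales. A sketch (the `label` shorthand is my own, not from the paper):

```r
## Item key for the eight questions: 7 Likert items + 1 multiple-choice item
item_key <- data.frame(
  item  = letters[1:8],
  label = c("weird", "real_world_likely", "visually_complicated",
            "hard_to_identify", "disturbing", "like", "arousing",
            "object_most_likely"),
  scale = c(rep("likert_1_7", 7), "multiple_choice")
)

nrow(item_key)  # 8
```

Merging this key onto the long data frame by `item` would let analyses filter the Likert items, (a)–(g), from the multiple-choice item, (h).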