
Reviews on submission 88 #13


fabianodalp commented 4 years ago

The assigned reviewers will post their reviews of this submission within this issue. The same thread will also be used to support interaction with the authors.

Reviewers, please check STATUS.md to determine which badges the artifact is applying for. A description of the badges can be found here: https://re20.org/index.php/artifacts/. You will also receive an e-mail with further instructions shortly.

neilernst commented 4 years ago

Submission: https://github.com/researchart/rose7re20/tree/master/submissions/88-girardi

neilernst commented 4 years ago

Badges applied for: Reusable, Available

The data is available on Figshare, so I agree with Available. The README should list the DOI explicitly.

As for the Reusable badge:

jinghui-cheng commented 4 years ago

The authors are applying for the badges of Available and Reusable.

Available

The protocol, data, and code are publicly available, so I agree that this badge is warranted.

Reusable

I agree with all the comments made by Neil. In particular, there is no "install.R" file, and I am not sure whether your IRB or equivalent allows you to release participants' raw data (even if anonymized); please double-check.

I think the protocol also needs to be improved to facilitate reusability, in the following aspects:

fabianodalp commented 4 years ago

Hello @jinghui-cheng and @neilernst, thank you for the thorough reviews. I have assigned the Available badge. Regarding the various comments you provided for "Reusable", @danielagir, can you please respond to them?

danielagir commented 4 years ago

Thank you for your detailed feedback. We are addressing your comments and will update the package accordingly by the end of the week.

danielagir commented 4 years ago

Dear @neilernst and @jinghui-cheng,

Thanks a lot for your constructive and detailed comments, which we have addressed as follows:

@neilernst issues:

- We added install.R to the replication package on Figshare and removed the STATUS.md and LICENSE.md files (see the sketch below).
- By providing instructions on how to run the scripts from the command line, we aim to make the results replicable also for those who are not familiar with RStudio; that is why we use Rscript.
- Regarding the sharing of raw data, we are not making the dataset publicly available for replication. In the package we only provide an example of how the input file needs to be formatted. The whole study has been approved by the Institutional Review Board of Kennesaw State University, study #16-068, as also specified in the paper. We have added the information about the IRB approval to the README.
- We moved all the instructions for running the scripts to the INSTALL.md file, to avoid confusion and redundancy with the content of README.md.
- After running the install.R file, all the libraries should be installed correctly, but please let us know if you encounter any additional issues.
- We also tested the scripts on some older macOS machines and observed a memory error that shuts down "Emotions.R". This may be due to the operating system and its specific configuration, as other tests were successful. We have indicated this potential problem in the README, under "Known issues".
- We have removed the overlapping instructions between the README and the INSTALL file; thanks for the recommendation.
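For readers who want to reproduce the dependency setup, here is a minimal sketch of what an install.R of this kind typically looks like. The package names below are placeholders, not the artifact's actual dependencies; the authoritative version is the install.R shipped in the Figshare package.

```r
# install.R -- minimal sketch of a dependency-installation script.
# NOTE: the package names are placeholders, not this artifact's real
# dependency list; see the install.R in the Figshare package.
required <- c("dplyr", "ggplot2")  # hypothetical dependency list

# Install only the packages that are not already present.
missing <- setdiff(required, rownames(installed.packages()))
if (length(missing) > 0) {
  install.packages(missing, repos = "https://cloud.r-project.org")
}
```

With such a script, the pipeline can be run from the command line without RStudio, e.g. `Rscript install.R` followed by `Rscript Emotions.R` (the exact arguments Emotions.R expects are documented in the artifact's INSTALL.md, not assumed here).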

@jinghui-cheng issues:

- Please check the answers to @neilernst for the issues about the install.R file and the ethics aspects.
- We have added the information to the protocol as follows:
  - We have specified in step Set-up 4 that we are using an Empatica E4 as the wristband.
  - We have specified in step Calibration 13 that the interviewer should build rapport with the participant and visually check that the participant is at ease.
  - We have specified in step Calibration 9 that the reader should refer to ElicitationImages.pdf, and we have specified the duration of each image and the intervals between them.
  - In the steps (Calibration 7-11; Interview 3-8) in which the expression "at three press the button..." was used, we have applied the following modification to clarify: "I will count until three; at three, press the button...".
  - We have removed step Set-up 14 and referenced Interview.pdf in step Set-up 12.
  - We have added the questionnaires Self-Assessment-Questionnaire.pdf and Calibration-Questionnaire.pdf as attachments and referenced their names in Protocol.pdf, together with DemographicSurvey.pdf, ElicitationImages.pdf and Interview.pdf. We preferred to do that instead of having two versions, in the shared package and as appendices.
- We have also updated the README accordingly.

Remaining issues:

@jinghui-cheng issues: We did not understand the statement "how to use the data to calculate the device"; please let us know what you mean by that.

jinghui-cheng commented 4 years ago

Dear @danielagir, thank you for the quick response. I will check the files shortly.

As for the statement, sorry -- there was a typo. I meant to say "how to use the data to calibrate the device." Thanks!

danielagir commented 4 years ago

@jinghui-cheng, below we describe how we use the data for calibration.

In line with previous research, we run a preliminary step for device calibration and emotion elicitation. Specifically, we use the data collected during the emotion-elicitation step to adjust the scores obtained from the self-assessment questionnaire. The emotion-elicitation step uses 35 emotion-elicitation pictures, which you can find in the replication package (ElicitationImages.pdf). Each picture is displayed for 10 seconds, with intervals of five seconds between pictures to allow the user to relax. The whole slideshow lasts nine minutes. During the first and last three minutes, calming pictures are shown to induce a neutral emotional state, while during the central three minutes the user sees pictures aimed at triggering negative and positive emotions. The participant is then asked to fill in a form to report the degree of arousal and valence they associated with the pictures on a visual scale from 0 to 100. These scores are used as a baseline to adjust the subsequent scores provided during the interview: we adjust the valence and arousal scores collected during the interview based on the mean values reported while watching the emotion-triggering pictures. Detailed information on the device-calibration step is also provided in Section III-F of the paper.
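To make the adjustment step concrete, here is a minimal sketch in R of how such a baseline adjustment could look. It assumes the adjustment is a simple subtraction of the calibration means; the authors' exact procedure is defined in Section III-F of the paper, and the data frames and column names below are hypothetical.

```r
# Hedged sketch of a baseline adjustment: subtract the mean valence and
# arousal reported during calibration from the interview scores.
# 'calibration' and 'interview' are hypothetical data frames with numeric
# 'valence' and 'arousal' columns on the 0-100 self-report scale.
adjust_scores <- function(interview, calibration) {
  baseline_valence <- mean(calibration$valence, na.rm = TRUE)
  baseline_arousal <- mean(calibration$arousal, na.rm = TRUE)
  transform(interview,
            valence = valence - baseline_valence,
            arousal = arousal - baseline_arousal)
}

# Hypothetical example data, for illustration only.
calibration <- data.frame(valence = c(55, 60, 58), arousal = c(40, 42, 45))
interview   <- data.frame(valence = c(70, 30),     arousal = c(65, 50))
adjust_scores(interview, calibration)
```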

neilernst commented 4 years ago

Thanks for all the changes. I am fine with the reusable badge now.

jinghui-cheng commented 4 years ago

Dear @danielagir, Thanks for all the changes!

For the calibration, it would be better to add a line to the protocol saying that people need to adjust all subsequent valence and arousal scores based on the mean values collected during the calibration phase, and then refer to Section III-F of the paper.

I agree with the Reusable badge once this point is addressed.

fabianodalp commented 4 years ago

@danielagir, please address the suggestion by @jinghui-cheng and we will be ready to award the Reusable badge!

danielagir commented 4 years ago

@jinghui-cheng @fabianodalp we have added the note to the protocol as suggested. Thank you again for all your suggestions!

fabianodalp commented 4 years ago

Great, thank you @danielagir. Let me award the badges. By the way, make sure you explicitly cite the DOI of your artifact in the paper! [I haven't checked that.]