Open TheBeege opened 8 years ago
So, I have a few ideas on this. For peer review, we could have a system similar to Wikipedia's, where users can flag articles on aspects that need to be addressed: for example, a weak methodology section or background review. These flags would roll up into aggregate scores per article, and when the author releases a new version of the article, the aggregates would reset (similar to what iTunes does when a new version of an app is released), though readers would still have access to the score history across versions (again, similar to iTunes). Essentially, this would reduce the burden of assigning reviewers and make peer review a more crowd-sourced process. I will also start another topic on article feedback generation.
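To make the idea concrete, here is a minimal sketch of the flagging model described above. All names here are hypothetical, not part of any existing codebase: readers flag weak sections, flags aggregate per section, and releasing a new version archives the current aggregate and starts fresh while keeping the per-version history.

```python
from collections import Counter


class Article:
    """Sketch of the proposed crowd-sourced flagging model."""

    def __init__(self, title):
        self.title = title
        self.version = 1
        self.flags = Counter()  # live aggregate for the current version
        self.history = {}       # version number -> archived flag counts

    def flag_section(self, section):
        # A reader flags a section (e.g. "methodology", "background")
        # as needing attention; counts aggregate across readers.
        self.flags[section] += 1

    def release_new_version(self):
        # Archive the current aggregate and reset it, iTunes-style,
        # so the new version starts with a clean slate while the
        # old scores remain visible in the history.
        self.history[self.version] = dict(self.flags)
        self.version += 1
        self.flags = Counter()
```

Usage would look like flagging a section a few times, then calling `release_new_version()` and inspecting `history` to see how earlier versions scored.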
Should peer reviews be anonymous? Should peer reviewing be initiated by the app as a request for users to review articles, or can we expect potential reviewers to find articles to review themselves?
@DesireeVanHaute mentioned that, traditionally, editors select researchers to review articles, and reviews occur anonymously beyond that. Considering we want to keep this a low-touch system, how can we remove or reduce the need for that step?