sciencefair-land / sciencefair

The futuristic, fabulous and free desktop app for working with scientific literature :microscope: :book:
https://sciencefair-app.com
MIT License

p2p peer review #139

Open lukeburns opened 7 years ago

lukeburns commented 7 years ago

An issue to discuss implementation details of p2p peer review. I've documented some of my thoughts on what a p2p review process might look like (see https://github.com/lukeburns/peer-review) fwiw.

My initial thought on an implementation is to create modules in the hyper* ecosystem with mechanisms for publishing feeds under a "publishing feed" that includes identity metadata and a collection of feeds of publications and forwarded publications that would benefit from review (would multifeed or dat-pki be helpful here?), plus linking between feeds so that one can find the reviews of a given feed (hyperdb?).

Is this at all like what you've been thinking?

blahah commented 7 years ago

@lukeburns thanks for opening this! Yes that sounds very close to what I've been thinking!

dat-pki in general is the plan for authenticated distributed group membership for feed creation and subscription, and an iterative peer review system is a particular way of structuring linked feeds with permissions and group membership.

Late here but I'll read your repo tomorrow and write up my thoughts.

LGro commented 7 years ago

@lukeburns what do you think about extending your model beyond pre-publication? One common entry point would be authors requesting peer review before publishing, but another could be someone deciding to review already-published work, or work that is available from preprint data sources.

lukeburns commented 7 years ago

I see two approaches: linking to external sources or, better, replicating the external publication on the network as a static hyperdrive. The latter option is nice because it doesn't require the author to publish on the network for the paper to be reviewed and it helps ensure the availability of content by distributing it across peers.

LGro commented 7 years ago

@lukeburns, do I understand correctly that you propose the following workflow for a review?

  1. Reviewer or author copies publication to a new review hyperdrive
  2. Creator of the review hyperdrive shares it with the target audience (e.g. author, reviewer group, publisher)
  3. Reviewer adds comments to the hyperdrive alongside the publication
  4. Author comments on reviews / updates publication
  5. Optionally: The review hyperdrive creator makes it public and links the hyperdrive to the updated/released publication so people can see the review process
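The five steps above could be sketched as follows. This is only an in-memory model for discussion, not real hyperdrive code: the `ReviewDrive` class, its key format, and the access-list stand-in for encrypted sharing are all hypothetical names, assuming a real implementation would use hyperdrive archives addressed by their public keys.

```javascript
// Minimal in-memory model of the five-step review workflow above.
// A ReviewDrive is just a keyed map of paths to content, with an
// access list standing in for encrypted sharing.

let nextKey = 0;

class ReviewDrive {
  constructor (publicationPath, publicationBody, creator) {
    this.key = `drive-${nextKey++}`;     // stand-in for a hyperdrive key
    this.creator = creator;
    this.files = new Map([[publicationPath, publicationBody]]); // step 1: copy publication in
    this.audience = new Set([creator]);  // who the drive is shared with
    this.public = false;
  }

  share (peer) {                         // step 2: share with the target audience
    this.audience.add(peer);
  }

  addFile (peer, path, body) {           // steps 3-4: reviewer comments, author responses
    if (!this.audience.has(peer)) throw new Error(`${peer} has no access`);
    this.files.set(path, body);
  }

  makePublic () {                        // step 5 (optional): open the drive up
    this.public = true;
  }
}

// Usage: a reviewer copies a publication and runs one review round.
const drive = new ReviewDrive('paper.pdf', '<pdf bytes>', 'reviewer-1');
drive.share('author');
drive.addFile('reviewer-1', 'reviews/reviewer-1.md', 'Fig. 2 needs error bars.');
drive.addFile('author', 'responses/round-1.md', 'Added error bars in v2.');
drive.makePublic();
```

The access check in `addFile` is doing the work that encryption and key distribution (e.g. via dat-pki) would do in a real system.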
aschrijver commented 7 years ago

@lukeburns the P2P process is very interesting compared to the traditional process!

@Igro as I understand @lukeburns proposal, it is more or less like you say, but it is a more open and transparent process.

In step 2 the author forwards to selected peers whom he/she knows and who should be involved. Then there would be a step 2a where these peers in turn find other peers who are valuable reviewers (and maybe they also forward again).

In this way scientific work would become available to a larger group sooner, which in turn might lead to quicker validation and better feedback.

aschrijver commented 7 years ago

One other thing to consider:

I don't have much experience with scientific review processes in particular, but I've worked a lot with CMSes in a SaaS environment where you have, e.g., content review processes.

Here we never had just one review type. We had many:

We used state machines (for the simple cases) and workflow engines (for the complex ones) to implement this.

Now, in no way am I recommending that you include a workflow engine. I'm just saying you should think carefully about which processes you are going to support, now and in the future, and design accordingly so that adaptation and extension don't introduce too many breaking changes.

LGro commented 7 years ago

@aschrijver that step 2a you are mentioning, would you envision it as a restart of the whole process, starting again at step 1 with a replication of the publication? I am sceptical about whether and how it would be feasible to let non-owners extend the visibility of an encrypted hyperdrive based on dat-pki.

(Thanks all for demonstrating that a lower-case L is a really bad choice for my username. Also sorry @igro for the confusion.)

aschrijver commented 7 years ago

First of all, this was my interpretation of how things work; you'll have to ask @lukeburns to be sure. But yes, that's basically what it boils down to, I guess.

But I'm not sure, because allowing the review to spread out over an organically growing network of peers (peer-to-peer-to-peer, etcetera, which you as the original author don't control) raises the question of how you know that all the vital reviewers are done reviewing (those that are required to participate, not the nice-to-haves).

aschrijver commented 7 years ago

Some more analysis based on my previous observation.

A review could be addressed to:

A review process could stop when:

This raises the follow-up question: how do you stop a review process and avoid people wasting time?

All these choices have (potentially significant) design impact and lead to further questions.

lukeburns commented 7 years ago

@LGro a minimal and fairly generic implementation might be a hypercore-archiver plus a feed of the keys in the archive:

this says nothing about the structure of reviews (maybe comments on a hypercore feed or a hyperdrive with latex files). it also says nothing about how peers find each other, so it could work for open or closed review networks.
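To make the "archiver plus key feed" idea concrete, here is an in-memory model, not actual hypercore-archiver or hypercore code: `ArchiverPeer`, its `feed` array, and the `sync` method are hypothetical stand-ins for an archiver that stores archives by key and an append-only feed announcing which keys a peer replicates.

```javascript
// In-memory model of "hypercore-archiver + a feed of keys".
// Each peer keeps an archiver (archives stored by key) and an
// append-only feed of announced keys; followers sync by mirroring
// every key announced on the feeds they follow.

class ArchiverPeer {
  constructor (name) {
    this.name = name;
    this.archives = new Map();   // key -> archive content (the "archiver")
    this.feed = [];              // append-only feed of announced keys
    this.following = new Set();  // peers whose feeds we watch
  }

  publish (key, content) {       // archive some content and announce its key
    this.archives.set(key, content);
    this.feed.push(key);
  }

  follow (peer) {
    this.following.add(peer);
  }

  sync () {                      // replicate every key announced by followed peers
    for (const peer of this.following) {
      for (const key of peer.feed) {
        if (!this.archives.has(key)) {
          this.archives.set(key, peer.archives.get(key));
          this.feed.push(key);   // re-announce, so our own followers can find it
        }
      }
    }
  }
}

// Usage: a publication propagates author -> reviewer -> reader
// purely through followed feeds.
const author = new ArchiverPeer('author');
const reviewer = new ArchiverPeer('reviewer');
const reader = new ArchiverPeer('reader');
author.publish('paper-key', 'paper v1');
reviewer.follow(author);
reviewer.sync();
reader.follow(reviewer);
reader.sync();                   // reader now mirrors 'paper-key' too
```

Note that the re-announce in `sync` is exactly the "forwarding" step: a peer that mirrors a key makes it discoverable to its own followers, which is also where the slow-propagation concern below comes from.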

there are a couple issues with this proposal.

the propagation of reviews through the network might be too slow if they have to go through the same filtering process that publications do (steps 1, 2, 3). one way around this might be to have all peers auto-replicate reviews. they could even auto-replicate publications, if filtering is not necessary at the size of the network.

additionally, while it's important that peers have filtration control, peers with no followers go unheard. one could implement a process for "pushing" messages to new peers (e.g. https://github.com/jayrbolton/dat-pki/issues/7), so that a reviewer can push a review onto the network whether or not they are followed by other peers, or for sending "follow requests." otherwise, a peer needs to find a "champion" who is already connected to other peers and convince them to replicate their publication / review -- which might be all one needs in a small review network (e.g. an undergrad researcher on a network of collaborators on a research program has a review replicated by their advisor).

a beefier implementation might use dat-pki and allow for selective sharing with groups or individual peers. i think this is a good first step that works with minimal dependencies.

lukeburns commented 7 years ago

@LGro so to actually answer your question, the key differences between what you said and what i'm imagining are how publications are shared and filtered and the structure of reviews (which i'm not settled on yet):

  1. peer (author or reviewer) publishes a new hyperdrive, appends it to their replication feed (i.e. "submits a review request" to their peers), and optionally shares it directly with peers (whether on- or off-network)
  2. their peers either (a) ignore it, (b) forward it to their peers, or (c) go to step 1 to publish a response
  3. author makes revisions -- return to step 2
  4. (optional) publication or review hyperdrives are cited off-network
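The publish / forward / respond loop above could be sketched like this. Again a hedged toy model, not real hyperdrive code: `ReviewPeer`, the plain drive objects, and the `parent` link back to the reviewed drive's key are all hypothetical names chosen for illustration.

```javascript
// Sketch of the publish / forward / respond loop above. Plain objects
// stand in for hyperdrives; `parent` links a response drive back to
// the key of the drive it reviews, which keeps reviews findable.

let seq = 0;
const newKey = () => `key-${seq++}`;

class ReviewPeer {
  constructor (name) {
    this.name = name;
    this.feed = [];   // replication feed: drives this peer announces
  }

  // step 1: publish a new drive (a publication, a revision, or a review)
  publish (files, parent = null) {
    const drive = { key: newKey(), author: this.name, files, parent };
    this.feed.push(drive);
    return drive;
  }

  // step 2b: forward someone else's drive on our own feed
  forward (drive) {
    this.feed.push(drive);
  }

  // step 2c: respond by publishing a new drive linked to the original
  review (drive, comments) {
    return this.publish({ 'review.md': comments }, drive.key);
  }
}

// Usage: one round of the loop.
const alice = new ReviewPeer('alice');
const bob = new ReviewPeer('bob');
const paper = alice.publish({ 'paper.tex': 'v1' });   // step 1
bob.forward(paper);                                   // step 2b
bob.review(paper, 'Clarify section 3');               // step 2c
alice.publish({ 'paper.tex': 'v2' });                 // step 3: revision
```

Because a review is itself just a published drive, step 2c recursing into step 1 falls out naturally, and the `parent` key is what off-network citations in step 4 would point at.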
step21 commented 6 years ago

@lukeburns I think it depends on who you see as your audience. If you would like eventual adoption by academics, I think it would be best/easiest to start with 'traditional' peer review, where articles are submitted/reviewed and then eventually accepted by a 'journal'. Then, in addition to that, there can be 'free for all' review afterwards, or beforehand people could ask for input on work in progress, like ResearchGate also offers, for example. The reason for this is that, for the foreseeable future, people need to show that they have certain acceptable publications, such as when applying for jobs or just generally to establish credentials. This is easiest if a 'journal' curated by a group of specific individuals can build a reputation and I can then show that I published there. On the other hand, if I had to say 'well, 10 peers, whoever they are, reviewed my work' without knowing who they are or their credentials, it's worthless for this purpose. I realise this doesn't address everything mentioned above, or not in as much detail, but maybe it's something to keep in mind.

lukeburns commented 6 years ago

@step21 the above proposal isn't free for all review, although you could do that. it works for arbitrary networks of peers, so you could easily put together a closed group of reviewers / editors for a journal using this. you could even do more interesting things like e.g. build a trusted network of reviewers underlying a consortium of overlay journals, so that the overlay journals have a consistent and reliable source of reviewers to tap into.

step21 commented 6 years ago

Sure. @lukeburns thanks for the clarification. That sounds really great. In general I just think that a lot of what I hear from #openscience or the 'against publishers' movement is very much 'free everything' without a replacement, or with a very disorganized one. If projects like sciencefair actually want to include academics and give them a platform, I think it is important to consider these things, like how best to include them. Also because if I were just a random tech guy with an opinion or something, I would just use a blog ;)