chris-huggins opened 5 years ago
Goal: to understand what the Editorial Office do when "QCing submissions".
Detailed findings, user test results, and summary of key issues
Check for duplicate submissions
Check for authors that are Editors
This is prone to human error: it is tedious, and it is easy to enter typos or miss editors. It also takes up most of the initial QC time.
Check basic things are there
Hopefully validation will catch anything 'missing'. We would still likely need them to check that the manuscript looks 'normal' and includes figures, but should that remain the bulk of the initial QC time? We will add instructions for authors to ensure figures are included in the PDF and not added as supporting files (yet).
Check manuscript file size
For some reason they then check the manuscript file size and ask the author to change it if it's too large (or sometimes just compress it themselves). Is this because of eJP limitations? We already say "Files over 10MB may experience delays", or something along those lines, so this should be fine as long as export to eJP works.
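If we wanted xPub to take this check off the Ed Office's hands, it could be a simple soft-limit warning at upload time. A minimal sketch, assuming a 10MB threshold taken from the "Files over 10MB may experience delays" wording (the actual eJP limit, if any, is unconfirmed, and the function name is hypothetical):

```typescript
// Hypothetical soft limit based on the existing author-facing guidance
// "Files over 10MB may experience delays" -- not a confirmed eJP limit.
const SOFT_LIMIT_BYTES = 10 * 1024 * 1024;

interface FileSizeCheck {
  ok: boolean;
  message?: string;
}

// Warn (rather than block) when a manuscript exceeds the soft limit,
// so staff no longer need to eyeball file sizes manually.
function checkFileSize(sizeBytes: number): FileSizeCheck {
  if (sizeBytes <= SOFT_LIMIT_BYTES) {
    return { ok: true };
  }
  const sizeMb = (sizeBytes / (1024 * 1024)).toFixed(1);
  return {
    ok: false,
    message: `Manuscript is ${sizeMb}MB; files over 10MB may experience delays.`,
  };
}
```

Surfacing this as a warning rather than a hard block keeps the decision with the author, which matches the current practice of asking them to resubmit a smaller file.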
Send back to author
Due to the way eJP works, if anything is wrong they email the author to request an update to their submission, copying and pasting appropriate text from the "protocols" doc to compose the message. It's not clear how they know to do this, but they then check whether the author has addressed their "send backs"; if not, they "send" the submission to the editorial team via sticky notes, using them to flag that the submission needs attention. After this it's out of their hands.
If there are no send backs, they assign a Deputy Editor and it is done.
The Editorial team have to regularly check the submissions list for anything with a sticky note attached, which is how they become aware of submissions flagged by the Ed Office. This is inefficient and relies on knowledge of the sticky note colour coding.
Alert automatically when there are duplicates
Authors self-declare editors
Better sticky notes
Sticky notes are used to share submissions with Editorial, flag things to other people or teams, and generally communicate with anyone. It may be more effective to use other features for these purposes. The current ideas requiring testing are:
Labels: add labels to submissions to communicate anything important to anyone looking at them, e.g. author is an Editor, re-submission, etc.
Transfer: send the submission to a specific person or team with a message to add information, so there's no need for them to hunt through the list. This would also make them aware of a submission immediately (or add it to a work queue) instead of relying on them checking a list every x days.
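One of the ideas above, alerting automatically when there are duplicates, could start as a simple heuristic match. A minimal sketch, assuming a duplicate means "same normalised title and at least one shared author email" — the matching rule and all names here are hypothetical and would need Ed Office input before building anything real:

```typescript
// Minimal shape of a submission for duplicate matching (assumed fields).
interface Submission {
  title: string;
  authorEmails: string[];
}

// Lower-case and strip punctuation so trivial title variations still match.
function normaliseTitle(title: string): string {
  return title.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

// Assumed heuristic: same normalised title plus a shared author email.
function isLikelyDuplicate(a: Submission, b: Submission): boolean {
  const sameTitle = normaliseTitle(a.title) === normaliseTitle(b.title);
  const sharedAuthor = a.authorEmails.some((email) =>
    b.authorEmails.map((e) => e.toLowerCase()).includes(email.toLowerCase())
  );
  return sameTitle && sharedAuthor;
}

// Return all existing submissions that a new candidate likely duplicates,
// e.g. to auto-apply a "possible duplicate" label for reviewers to confirm.
function findDuplicates(candidate: Submission, existing: Submission[]): Submission[] {
  return existing.filter((s) => isLikelyDuplicate(candidate, s));
}
```

Flagging rather than rejecting keeps the human in the loop: the alert would just replace the manual "check for duplicate submissions" scan, not the judgment call.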
Note: There are other findings from this visit that will affect full submissions and revised submissions. This research should be revisited at the appropriate time.
@hdrury1 @gmaciocci Have we established yet how we see this next QC milestone playing out?
I added a ticket for a need to clarify technical feasibility with eJP, but my understanding is that we aim for a deployment where feature articles remain in xPub until QC is passed, while Research articles and other types continue "as-is".
My hope is the workflow will eventually pan out something like this:
My expectation is that we need to export "passed" submissions to eJP, the same as we do for initial submissions currently. This would mean somebody will need to manually do something in eJP for features to ensure they get to the full submission stage. If we've done things correctly, no feature article submissions that haven't been invited to full submission by Peter should make it to eJP. eJP won't know this, so we have to work out "how" we manually skip QC in eJP.
I've added a ticket for tech investigation into what impact this has on exporting to eJP, but we also need to look into what needs to be done manually in eJP.
Is this more or less how you both see it working? Do we need to break it down further? e.g. no initial decision in xPub. I think we discussed assigning Peter Rodgers automatically for features but this would skip QC altogether?
I think we've currently defined "QC" and "Features workflow" as different milestones, but I assume we want to use feature articles for the testing of the "QC" milestone features. Also worth noting that our Ed Office visit established more requirements for "doing initial QC checks" than just "Commenting".
Meeting on Wed 12 Jun 2019 with Giuliano, Peter Rodgers, Susanna, Chris H, Hannah and xPub devs
Intended to discuss options around how we build and release "a features workflow". The outcome was essentially to continue as planned after exploring some other options. There are still questions over how to break down the "initial submissions" into separate releases, if at all.
The main concern being "when" we export to eJP, and what work, if any, has to be reproduced. The current assumption is that eJP will not be willing to help (e.g. by importing additional properties such as labels or assigned editors).
Therefore this milestone would initially result in a workflow of:
We can then consider introducing other features into xPub, such as labelling duplicates or where authors are editors, linking submissions, labelling for collections, etc. But any such work might have to be reproduced in eJP as sticky notes, so it may only make sense to test a small sample (if at all).
There is then potential for a later release where we handle initial consultation and assessment in xPub, asking the Ed Office to manually change important submissions to a later stage to invite the author to full submission.
@hdrury1 Full Workflow diagram (for initial submission process, not just QC)
Overview of the features that make up the "QC milestone", which would be used to QC all article types, eventually by Editorial office:
Ideally we would build and test the xPub work from initial submission through to a final decision (not including revised submissions for now but including peer review, consultation etc.) for feature articles.
This would allow us to build and test essentially a "research article lite" version without requiring our Editors to work across both untested xPub software and eJP.
The first stage of that is building the quality control check feature(s), which will initially be tested with feature articles.
Early tasks to explore this further