opensafely-core / reports

Website for viewing OpenSAFELY reports
https://reports.opensafely.org

Add a review step before publication of routine reports [wip] #286

Open brianmackenna opened 2 years ago

brianmackenna commented 2 years ago

Best practice on mandating review is in place across a lot of Bennett Institute websites. I propose that we introduce a mandatory review stage before publishing new/updated reports for reports.opensafely.org

This may include code review, but at this stage I'm most keen on getting a sense check on charts, figures, etc. before they are made publicly available front and centre.

HelenCEBM commented 2 years ago

For reference here's our existing guidance on publishing reports https://bennettinstitute-team-manual.pages.dev/products/opensafely-reports/#publishing

HelenCEBM commented 2 years ago

Should the review be enforced through a technical solution, e.g. a second person needs to sign in and approve before (a) first publication and maybe (b) every update? Or, e.g., by making an issue for review as per output checking? Just a Slack discussion, or something else?

LisaHopcroft commented 2 years ago

The process of review for the vaccine reports could look like:

  1. review of format and static text using dummy data, including checking for:
    • adequate context and introduction (consider whether interpretable for journalists)
    • links to external documentation (e.g., The Green Book for vaccines)
    • any necessary caveats
    • links to other relevant Bennett repos/papers/reports
    • explanatory text for figures/tables (e.g., table footnotes)
    • all necessary breakdowns included (e.g., using ethnicity 16 where possible/appropriate rather than ethnicity 6)
    • statement of parameter values? (e.g., time between doses, though this is in the code)
  2. review of content (data, tables, figures etc)
    • sense checking: e.g., percentages/ratios not greater than 100 or 1
    • cross-check with national data (check coverage rates and that ours are ~40% of national numbers)
    • output checking: including low numbers, unapproved outputs
    • code review
    • readability of figures (e.g., accurate axes, readable labels)
    • some elements may need to be revisited once real data are used, e.g.:
      • all necessary breakdowns included (e.g., using ethnicity 16 where possible/appropriate rather than ethnicity 6)
      • explanatory text for figures/tables (e.g., table footnotes)

Step (1) certainly needs manual review - clinical informatician + data scientist? Some of Step (2) could be handled automatically (e.g., by tests and automated output checking).
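
As a rough illustration of the automatable parts of Step (2), something like the sketch below could run as a test before publication. This is a minimal sketch assuming the report's underlying data is available as a pandas DataFrame; the column names (`coverage_pct`, `opensafely_count`, `national_count`) are hypothetical placeholders, not names from the reports codebase.

```python
# Minimal sketch of automated sense checks for report data.
# Assumes a pandas DataFrame; all column names are hypothetical.
import pandas as pd


def check_percentages(df: pd.DataFrame, column: str) -> None:
    """Fail if any percentage falls outside 0-100."""
    bad = df[(df[column] < 0) | (df[column] > 100)]
    assert bad.empty, f"{len(bad)} rows in {column!r} fall outside 0-100:\n{bad}"


def check_ratios(df: pd.DataFrame, column: str) -> None:
    """Fail if any ratio falls outside 0-1."""
    bad = df[(df[column] < 0) | (df[column] > 1)]
    assert bad.empty, f"{len(bad)} rows in {column!r} fall outside 0-1:\n{bad}"


def check_against_national(
    df: pd.DataFrame, expected_fraction: float = 0.4, tolerance: float = 0.1
) -> None:
    """Fail if OpenSAFELY counts drift far from ~40% of national numbers."""
    fraction = df["opensafely_count"].sum() / df["national_count"].sum()
    assert abs(fraction - expected_fraction) <= tolerance, (
        f"OpenSAFELY counts are {fraction:.0%} of national numbers; "
        f"expected roughly {expected_fraction:.0%}"
    )
```

Checks like these could run in CI against the released outputs before a report goes live, leaving Step (1) to human reviewers.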

LisaHopcroft commented 2 years ago

A technical solution sounds good to me.

If we want an explicit record of checking each pre-defined element (like those provided above), could we use pull request templates for the first release?

brianmackenna commented 2 years ago

wip https://docs.google.com/document/d/1bLZkwopTD4xccRgjtisEOr9w5MfI_jg558WnfpU73oA/edit

LisaHopcroft commented 2 years ago

The document has been fleshed out with further description of checks and processes.

What is missing so far is how (if?) we're going to record this process. I know that we want to make this light touch, but perhaps we could have an issue for each publication with a list of things to tick off? The issue could be generated using a template so that we don't forget anything.
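
To make that concrete, here is one sketch of what such a template might look like, as a hypothetical `.github/ISSUE_TEMPLATE/report-publication-review.md`. The filename and wording are illustrative only; the checklist items are drawn from the review steps listed above.

```markdown
---
name: Report publication review
about: Checklist to complete before a report is published or updated
---

## Format and static text (dummy data)

- [ ] Adequate context and introduction (interpretable for journalists)
- [ ] Links to external documentation (e.g., The Green Book)
- [ ] All necessary caveats stated
- [ ] Links to other relevant Bennett repos/papers/reports
- [ ] Explanatory text for figures/tables (e.g., table footnotes)
- [ ] All necessary breakdowns included (e.g., ethnicity 16 vs ethnicity 6)
- [ ] Parameter values stated where relevant (e.g., time between doses)

## Content (real data)

- [ ] Sense checks pass (no percentages > 100, no ratios > 1)
- [ ] Cross-checked against national data (~40% of national numbers)
- [ ] Output checking complete (low numbers, unapproved outputs)
- [ ] Code reviewed
- [ ] Figures readable (accurate axes, readable labels)
```

Requiring the completed issue to be linked from the publishing pull request could give us the explicit record without adding much overhead.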

LisaHopcroft commented 2 years ago

To keep this up to date, we now have three documents for:

And four issue templates that capture a checklist for:

Users are informed in the documentation and in the checklist preamble that they must provide a completed checklist before final sign-off is granted.