Closed: amyhughes closed this 3 years ago
Yeah it's a shame we didn't collaborate on this a year ago... probably could've freed up the data team a lot! Lesson for the future...
Basic question, since I haven't looked at this repo before: is the output just an email that says "check the website" when there's something noteworthy? It doesn't send a spreadsheet or store one somewhere?
Thanks Joe, yes, at the moment it's just an email to prompt the data team to check the site if the figures look concerning (interesting).
I'll take a look at better error handling and at making the table construction more readable.
What does this change?
Feedback from Pamela and Niamh was that we should only alert for `newCasesBySpecimenDate` if the number of cases per 100,000 is over 50. This change applies that threshold by comparing the number of cases to the population figures downloaded from the ONS. Note that we may need to update the URL for population figures when new estimates are released in the autumn. I haven't done anything to automate fetching the latest figures, since there seem to be slight differences between each release, so we will need to manually test the fetch if and when we switch anyway.

How to test
Running locally, we get reasonable-looking results:
Some metrics have exceeded 100.0% change week on week:
Check https://coronavirus.data.gov.uk/
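
For reviewers skimming the diff, the gist of the new condition is roughly the following. This is a minimal sketch with made-up names (`passes_population_threshold`, `CASES_PER_100K_THRESHOLD`), not the repo's actual code; in reality the population figures come from the ONS download rather than being passed in as arguments.

```python
# Hypothetical sketch of the new condition: the existing week-on-week change
# check still runs as before; this extra filter only applies to
# newCasesBySpecimenDate.
CASES_PER_100K_THRESHOLD = 50


def passes_population_threshold(metric: str, cases: int, population: int) -> bool:
    """Return True if the figure is large enough to be worth alerting on."""
    if metric != "newCasesBySpecimenDate":
        return True  # other metrics are unaffected by this change
    return cases / population * 100_000 > CASES_PER_100K_THRESHOLD


# e.g. 120 cases in an area of 250,000 people is 48 per 100k, below 50,
# so no alert even if the week-on-week change exceeded 100%.
assert passes_population_threshold("newCasesBySpecimenDate", 120, 250_000) is False
```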
How can we measure success?
We should keep checking that the alerts are being used, I suspect interest in these figures will dwindle with time.
Have we considered potential risks?
Currently we log an error if we can't find the population figures in the ONS data for the reported `newCasesBySpecimenDate` for a given area, but we don't do anything to expose this to Pamela and Niamh. Running this locally we find a match for every area code, but if it did happen it might be nicer to include a warning about the missing data in the email we send, as sketched below.
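
A sketch of what surfacing that could look like (hypothetical names and data shapes; the repo's actual structure will differ):

```python
import logging

logger = logging.getLogger(__name__)


def check_areas(cases_by_area, population_by_area, threshold=50):
    """Return (area codes to alert on, warnings to surface in the email)."""
    alerts, warnings = [], []
    for area_code, cases in cases_by_area.items():
        population = population_by_area.get(area_code)
        if population is None:
            # Today we only log this; collecting it as a warning means it
            # could also be appended to the email body for Pamela and Niamh.
            logger.error("No ONS population figure for area %s", area_code)
            warnings.append(f"No population figure for {area_code}; threshold not applied")
            continue
        if cases / population * 100_000 > threshold:
            alerts.append(area_code)
    return alerts, warnings
```

The email builder could then list any `warnings` under the usual alert text instead of the gap passing silently.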