cmu-delphi / covidcast-indicators

Back end for producing indicators and loading them into the COVIDcast API.
https://cmu-delphi.github.io/delphi-epidata/api/covidcast.html
MIT License

Consider ingesting Corona Data Scraper or COVID Tracking testing data #112

Closed: capnrefsmmat closed this issue 4 years ago

capnrefsmmat commented 4 years ago

The Corona Data Scraper project produces the testing data that's used by Covid Act Now. They seem to have testing data for specific counties they scrape. They're also connected to the COVID Atlas.

We should investigate their testing data and see if they report enough counties to make it worthwhile to ingest.

Meanwhile, COVID Tracking has an API for state-level testing data, which forecasting is already using even though it's not in our API.

jingjtang commented 4 years ago

Here is an EDA of the number of locations with testing data available. [attached figures: CDS_testingvolume_countylevel, CDS_testingvolume_statelevel]

3,189 unique counties in total (including PR); 731 unique counties have testing data available.
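For reference, a minimal sketch of how a coverage count like this could be produced with pandas; the file name and column names (`fips`, `tested`) are assumptions for illustration, not CDS's actual schema:

```python
import pandas as pd

# Hypothetical file and column names; CDS's actual schema may differ.
df = pd.read_csv("cds_timeseries.csv", dtype={"fips": str})
mask = df["fips"].str.len() == 5              # county FIPS codes are 5 digits
counties = df[mask.fillna(False)]
total = counties["fips"].nunique()            # ~3189, including PR
with_tests = counties.loc[counties["tested"].notna(), "fips"].nunique()  # ~731
print(f"{with_tests} of {total} counties report testing data")
```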

krivard commented 4 years ago

Jingjing confirms that the number of unique counties is in the 3k range, so we'll go ahead and make this a new indicator.

jsharpna commented 4 years ago

Here is the list of counties that we currently forecast... fips_pred.xlsx

krivard commented 4 years ago

Coverage over all states at state level, but not all states are represented at the county level. Publish both, but note in the Limitations section that not all states provide county-level information.

krivard commented 4 years ago

Proposed signals, after talking to Roni:

[1] In a perfect world this would be the same as case counts, but at the moment we have two sources of case data that give slightly different values. In that case we'd rather get this value from the same source we're getting the denominator from.
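As a toy illustration of the footnote, the percentage would use a numerator and denominator drawn from the same source, so that source-level reporting quirks cancel instead of mixing; the column names here are invented for the example:

```python
import pandas as pd

# Toy stand-in for CDS's per-county daily counts (column names assumed).
cds = pd.DataFrame({"fips": ["06001", "06003"],
                    "positive": [120, 4],
                    "tested": [1500, 30]})
# Per footnote [1]: numerator and denominator from the same source.
cds["pct_positive"] = 100 * cds["positive"] / cds["tested"]
```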

jingjtang commented 4 years ago

Added a pipeline here. I haven't compared the number of positive tests [1] with JHU or USAFacts yet.

krivard commented 4 years ago

Correlations are next!

krivard commented 4 years ago

Comparison to JHU is available here; there are a variety of kinds and magnitudes of discrepancies, but it's faithful to what CDS is publishing, so we'll go forward with it as an indicator.

jingjtang commented 4 years ago

Correlating CDS pct_positive from 2020-07-10 to 2020-07-13 against jhu-csse: confirmed_incidence_prop averaged from 2020-07-17 to 2020-07-20.
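A rough sketch of that lagged correlation using the covidcast Python client; note the `cds` source and `pct_positive` signal names are hypothetical here (they were never published to the API), while `jhu-csse:confirmed_incidence_prop` is a real signal:

```python
from datetime import date

import covidcast

# Hypothetical source/signal for CDS; real source/signal for JHU.
cds = covidcast.signal("cds", "pct_positive",
                       date(2020, 7, 10), date(2020, 7, 13), geo_type="county")
jhu = covidcast.signal("jhu-csse", "confirmed_incidence_prop",
                       date(2020, 7, 17), date(2020, 7, 20), geo_type="county")

# Average each signal over its window, per county, then rank-correlate
# over the counties present in both.
cds_avg = cds.groupby("geo_value")["value"].mean()
jhu_avg = jhu.groupby("geo_value")["value"].mean()
joined = cds_avg.to_frame("cds").join(jhu_avg.to_frame("jhu"), how="inner")
print(joined["cds"].corr(joined["jhu"], method="spearman"))
```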

jingjtang commented 4 years ago

Unassigned cases were added to the raw dataset, so the pipeline needs to be updated. Weird dates exist in the raw dataset: for example, today is 07-30, but they provide data for California (only) for 07-31. I have already posted an issue to their GitHub repo.
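One minimal guard against future-dated rows like that would be to drop anything stamped after the pipeline's run date; a sketch:

```python
import pandas as pd

# Raw rows, one stamped in the future (the CA 07-31 case described above).
raw = pd.DataFrame({"date": ["2020-07-30", "2020-07-31"],
                    "state": ["ny", "ca"]})
run_date = pd.Timestamp("2020-07-30")  # "today" at pipeline run time
raw = raw[pd.to_datetime(raw["date"]) <= run_date]
```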

Corona Data Scraper migrated from the old CDS to Li (a new name?) this week. As mentioned in the engineering team meeting, they changed the format of the data and of the location metadata JSON file. It seems there are more problems with their new release: one is stated above, and another important one is mentioned by another group here.

We might want to hold off on this source temporarily until the data becomes more stable.

jingjtang commented 4 years ago

Source: cds-test

Signal names (start dates):

- Start from 2020-02-20
- Start from 2020-03-03
- Start from 2020-03-07 (restriction: tested ≥ 50)

krivard commented 4 years ago

CDS closed a bunch of issues 24 hours ago, so we can return to work on this signal.

New bugs, but that's to be expected.

jingjtang commented 4 years ago

> CDS closed a bunch of issues 24 hours ago, so we can return to work on this signal.
>
> New bugs, but that's to be expected.

Bug fixed.

Set export start date for confirmed_ signals to 2020-02-20. Set export start date for tested and pct_positive to 2020-03-15.

For pct_positive: only computed where the tested incidence count is ≥ 50 (this rule and the export start dates are sketched after the list below). [attached figures]

For tested signals through the most recent days:

- ~126 MSAs available (20200809_msa_list.xlsx)
- ~643 counties available (20200809_county_list.xlsx)
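A sketch of the export rules above; the dates and the tested ≥ 50 threshold are taken from this comment, while the frame layout and function shape are assumptions:

```python
import pandas as pd

EXPORT_START = {"confirmed": "2020-02-20",
                "tested": "2020-03-15",
                "pct_positive": "2020-03-15"}
MIN_TESTED = 50  # pct_positive is only exported where tested incidence >= 50

def export_rows(df: pd.DataFrame, signal: str) -> pd.DataFrame:
    """Filter a signal frame down to the rows eligible for export."""
    out = df[df["date"] >= EXPORT_START[signal]]   # ISO dates compare safely
    if signal == "pct_positive":
        out = out[out["tested"] >= MIN_TESTED]
    return out
```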

ryantibs commented 4 years ago

@jingjtang @krivard I'm so happy we're pursuing this! I was about to create a new issue exactly on this, but wisely I decided to search first.

Can I get an update on where we are in terms of finalizing these signals?

krivard commented 4 years ago

Re-run correlations just to double-check that the changes in data format after the last correlations run didn't trash the signal. If it's still good, we can publish in the next release.

jingjtang commented 4 years ago

They updated their location metadata again. I updated the supporting mapping files in the pipeline, but noticed a new problem.

For the comparison, I did not change the pipeline: the state-level report for confirmed cases is generated from county-level data (see the sketch below). The only special case is state-level tested, where we use the state-level data directly.
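For clarity, the county-to-state roll-up amounts to grouping county rows by the first two FIPS digits; a toy sketch:

```python
import pandas as pd

# County rows; the first two digits of a county FIPS identify its state.
counties = pd.DataFrame({"fips": ["06001", "06003", "36001"],
                         "confirmed": [10, 2, 7]})
counties["state_fips"] = counties["fips"].str[:2]
state_confirmed = counties.groupby("state_fips")["confirmed"].sum()
# State-level `tested` is NOT derived this way; it comes straight from
# CDS's state-level rows, per the special case noted above.
```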

The correlation analysis results are shown below.

Correlating jhu-csse: confirmed_incidence_prop averaged from 2020-08-17 to 2020-08-20 at the county level against cds: confirmed_incidence_prop from 2020-08-10 to 2020-08-13.

Correlating jhu-csse: confirmed_incidence_prop averaged from 2020-08-17 to 2020-08-20 at the county level against cds: pct_positive from 2020-08-10 to 2020-08-13.

jingjtang commented 4 years ago

They have tested data for Puerto Rico at the county level but not at the state level. [attached figures]

krivard commented 4 years ago

It looks like the correlations are still good, so we should go ahead and schedule this for inclusion in the 1.9 release. The instability of the location metadata is annoying, but we've accounted for the need to document small tweaks like that by publishing a changelog for the API (https://cmu-delphi.github.io/delphi-epidata/api/covidcast_changelog.html). I think we can handle ongoing changes through that mechanism; we just need to be prepared to check the signal regularly and implement fixes.

Next deadlines:

RoniRos commented 4 years ago

@krivard @jingjtang I just reviewed the data in the CDS "latest" CSV file. There were #tests reports from ~11 states, which I compared with the JHU reports for those states:

- OR, NY, FL were identical or almost so, which I think means that CDS took it from JHU, yes?
- IL, MA, NH, TN were maybe 15-20% off, some up and some down.
- CO, ND were way off, with JHU reporting 50% more tested for CO and 150% more for ND.

I assume you have studied the discrepancies, so this may not be new to you; I just wanted to point them out in case. I tried to capture them in color in the following spreadsheet: cds-latest.xlsx

jingjtang commented 4 years ago

@RoniRos Yes. According to their code, they take the cases/deaths data from JHU-CSSE/NYTimes/xx.gov and cross-check them. Most values are the same as JHU's, which is expected. However, Corona Data Scraper makes its own decision after cross-checking, which might be why there are discrepancies for some states. Here is an example.

RoniRos commented 4 years ago

@jingjtang @krivard Yes, I see that, thank you. But the discrepancies in CO and ND are so huge that I think they are worth investigating specifically. E.g. in ND it's ~220,000 vs. ~540,000. I just searched the ND Dept. of Health and found this dashboard. From it, it is clear that the 540k figure measures tests, whereas the 220k figure measures individuals. Can you investigate CO similarly? And the other states that have a 15-20% discrepancy?

jingjtang commented 4 years ago

@RoniRos For CO, 797,493 measures people and 1,153,853 measures tests, which means CDS also reports tested at the individual level.
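For scale, the implied tests-per-person ratios from these figures are roughly 1,153,853 / 797,493 ≈ 1.45 for CO and ~540,000 / ~220,000 ≈ 2.5 for ND, so conflating the two measures overstates apparent testing volume by about that factor in each state.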

RoniRos commented 4 years ago

Thanks @jingjtang , that's good progress. At today's team lead meeting we decided we want to move forward with the '#tested' and 'pct_positive' signals, but only after making sure we know exactly what they measure. Can you please do the same thing you did for CO, also for:

Do you have time to do this today? If so, we could make a go/no-go decision by tomorrow, while Kathryn is still here. If not, that's fine; we can make the decision next week.

RoniRos commented 4 years ago

@ryantibs @krivard : @jingjtang and I conferred further this evening. Here is my understanding of the current state of (global) confusion about what types of testing are being reported:

What we are going to do:

If it turns out CDS consistently reports people, we can move ahead with (appropriately named) signals.

Otherwise, we need to decide what we want to do. My view is that we should not contribute to the confusion, but rather work to resolve it. I think we should support multiple signals, one for each type of measurement for which there is a meaningful number of reporting locations. We should not provide a single %positivity signal that is based in some locations on #people and in others on #tests. As we saw, there is a huge discrepancy between them, so any comparison would be highly misleading. We could explore ways to harmonize estimates from different types of counts, but that sounds like a research project.

The modeling group can still decide to use the 'hybrid' %positivity estimates. As Jingjing showed, they still have high correlation with incidence rates. I just don't think we should publish them without being able to explain exactly what they are.

Your thoughts welcome.

krivard commented 4 years ago

There's a dichotomy here between faithfully reproducing data from some source (like CTP or CDS) and publishing definitionally pure signals.

If we want to publish a Corona Data Scraper mirror, we should mirror CDS, warts and all.

If we want to publish a #tests or #tested signal, we probably will not be able to do that by mirroring a single source, and may have to consider a fusion signal. We already do this by combining the more reliable USAFacts cases/deaths with the less-reliable-but-includes-Puerto-Rico JHU cases/deaths (a toy sketch of the pattern follows).
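As an illustration of that prefer-then-fill fusion pattern (the real combined indicator is more involved; this only shows the shape of the idea, with made-up values):

```python
import pandas as pd

# Prefer USAFacts where it reports; fill the remaining geos (e.g. Puerto
# Rico, FIPS 72xxx) from JHU.
usafacts = pd.DataFrame({"fips": ["06001", "36001"], "cases": [10, 7]})
jhu = pd.DataFrame({"fips": ["06001", "72001"], "cases": [11, 3]})
combined = pd.concat([usafacts, jhu[~jhu["fips"].isin(usafacts["fips"])]],
                     ignore_index=True)
```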

I want to emphasize that in both cases, we can explain exactly what the signal is -- in the former, it's an exact mirror of CDS. We should 100% call out that CDS reports #tests in some regions and #tested in others, and that this prevents meaningful comparison between regions. We should probably not include CDS in the map. Beyond that, we should direct people to the CDS documentation.

The next question to resolve then is: Do we want to publish a CDS mirror at all? Do we have internal or external users who want it? cc @ryantibs

jingjtang commented 4 years ago

@RoniRos @krivard @ryantibs [I want to post this as early as I can so that Katie can get some sense of it today. The information in the sheets I made could have mistakes; some government websites provide unclear descriptions of their data.]

After investigating the dashboards of states and counties:

RoniRos commented 4 years ago

I like the idea of adding value by figuring out what's what and then publishing synthesized signals, e.g. one for each type of counting methodology, and one hybrid that covers a superset of the locations and maybe does some harmonizing. Obviously this will require more investigation and thinking, so it should be discussed and put somewhere on our priority stack.

As for mirroring CDS, let's wait for @ryantibs's answer and CAN's answers.

ryantibs commented 4 years ago

Sorry for the late reply here. I think Roni and I already talked about it this morning and discussed the value of mirroring, but, let me know if you still need me to weigh in on anything else.

RoniRos commented 4 years ago

We decided we would like to go ahead with mirroring CDS for now.
In the longer term it is important to have more coherent signals (e.g. separately for #individuals-tested, #tests-performed, etc.). C.A.N. is actively working on this now with Velorum, so we should probably not duplicate their work.

krivard commented 4 years ago

If we're mirroring, then we should not construct or publish a pct_positive signal.

RoniRos commented 4 years ago

@krivard : That's correct.

@ryantibs : What is your thinking following our discussion with Igor? Do you think we should go ahead with simple mirroring for now? And/or do the one week project with CAN?

RoniRos commented 4 years ago

Based on the team leads' discussion this morning, we should put CDS mirroring on indefinite hold. Should this issue be suspended?

krivard commented 4 years ago

Yes -- I'll close it for now, and we can reopen at a later date if we find it is needed.