`wip_` detection is probably a dead end, but the upgrade to 10.5 was a good idea. Probably a more substantial engineering effort is needed to really address this.
K will follow up with George, Wael, and Brian for a more detailed engineering convo about this.
A possible way forward for the direction update is to compute the update for partitions of the data by the time-series key (source, signal, time_type, geo_type, geo_value), updating the direction one partition at a time. An added advantage is that we can then schedule the direction update for different partitions at different times of the day.
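As a rough illustration, one partition could be selected by hashing the time-series key. This is a minimal sketch, assuming a CRC32-based hash and 16 partitions (both are illustrative choices, not the actual implementation):

```sql
-- Hypothetical partition selection: hash the time-series key and keep only
-- the rows whose hash falls in the requested bucket. The hash function and
-- bucket count are assumptions for illustration.
SET @n_partitions = 16, @partition_id = 0;

SELECT `source`, `signal`, `time_type`, `geo_type`, `geo_value`,
       `time_value`, `value`
FROM `covidcast`
WHERE CRC32(CONCAT_WS('|', `source`, `signal`, `time_type`,
                      `geo_type`, `geo_value`)) % @n_partitions = @partition_id;
```

Any deterministic function of the key would do; what matters is that every row of a given time series lands in the same partition, so each direction can be computed from a single partition's data.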
Results from meeting:
@AlSaeed to modify direction computation to handle only one partition per ingestion cycle, as described above. @korlaxxalrok to provide data distribution information to help determine an effective partitioning function. We expect this to reduce both memory and running time, at the expense of timeliness.
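For the data-distribution question, a natural first look is row counts per (source, signal); this is just a sketch of the kind of query that could inform the choice of partitioning function:

```sql
-- Hypothetical distribution check: how many rows each (source, signal)
-- contributes, to see how evenly a key-based partitioning would split the work.
SELECT `source`, `signal`, COUNT(*) AS n_rows
FROM `covidcast`
GROUP BY `source`, `signal`
ORDER BY n_rows DESC;
```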
@melange396 to modify covidcast table definition as specified below, then modify (1) CSV import to fill the is_wip column appropriately, and (2) meta cache update to handle one (source, signal) pair at a time. We expect (1) to reduce running time, and (2) to reduce memory requirements at the expense of some additional running time overhead.
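For (2), here is a hedged sketch of what one-pair-at-a-time could look like; the stats and column list are illustrative, and the real meta query may differ:

```sql
-- Hypothetical per-pair meta computation: summary stats for a single
-- (source, signal), so only that pair's rows are processed at once.
-- The source/signal literals are example values.
SELECT `time_type`, `geo_type`,
       MIN(`time_value`) AS min_time,
       MAX(`time_value`) AS max_time,
       COUNT(DISTINCT `geo_value`) AS num_locations,
       MIN(`value`) AS min_value,
       MAX(`value`) AS max_value,
       AVG(`value`)  AS mean_value,
       STDDEV(`value`) AS stdev_value
FROM `covidcast`
WHERE `source` = 'fb-survey' AND `signal` = 'smoothed_cli'
GROUP BY `time_type`, `geo_type`;
```

Looping this over all distinct (source, signal) pairs trades one big scan for many small ones, which is where the extra running-time overhead comes from.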
Table mods:
This is a short-term fix to get us back on our feet so we can take our time and get the next one right: the long-term plan specified in #161.
Also add int columns:
What's the use case for those deleted/missing variables? Will that come from a CSV filename or a row value? Or does it get applied in post-processing like the direction updater?
There are several circumstances that generate a missing value, and I'm still working on cataloguing them all (see also cmu-delphi/delphi-epidata#147 )
Some of them can arrive in the csv files during normal ingestion:
At least one of them cannot:
At some point we plan to have a unified API interface that invisibly handles both public + private queries; once that happens, I can also imagine returning "restricted access" if you ask for e.g. ZIP5-level data but don't provide an authentication key.
Partitioned directions are ready in cmu-delphi/delphi-epidata#161 for testing on staging
Meta cache update is nearly ready, but failing some mysterious unrelated tests; W to take a look.
Partitions are taking ~35 minutes apiece; a test run without specifying a partition also took ~35 minutes, so it's not clear how to run all partitions.
Partition 0 is failing in both cases: one time series, with signal 'wip_confirmed_7day_avg_cumulativ', doesn't have a stdev value, which means it doesn't have any data before `Constants.SIGNAL_STDEV_MAX_DAY`. The fix will depend on the intended behavior: would changing `Constants.SIGNAL_STDEV_MAX_DAY` be an acceptable fix, or is failure the correct behavior in this case?

Loads of options here, none of them particularly good.

Future-proof: change `Constants.SIGNAL_STDEV_MAX_DAY` to be -30 days from whatever day the direction updater is being run, so it will keep up with current events as they continue to shift. This is future-proof, but it will mean that directions may mysteriously change over time. For example, maybe three months ago the reference stdev was low, and a particular span of 7 days was marked as "increasing"; today, with an additional 3 months of data being used to compute the reference stdev, that same span of 7 days might be marked as "steady". Keep in mind nothing currently consumes the direction signal other than one particular mode in the COVIDcast map, and that mode doesn't permit comparing what the direction was when you looked at it a month ago with what it looks like now... but if someone is confused by it anyway, is that something we are willing to accept?

Quick fix: any time series with no data before `Constants.SIGNAL_STDEV_MAX_DAY` gets NA as its direction (the affected series can be enumerated with a query like the one sketched below). This means that new indicators (like Quidel COVID tests) that don't have data before 4/22 will never receive a direction -- so we should consider this fix a temporary one.

Long-term: the plan specified in #161.
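If we go with the quick fix, the affected series are easy to enumerate. A hypothetical check, assuming `time_value` is stored as YYYYMMDD integers and using the 4/22 cutoff from the thread:

```sql
-- Hypothetical: list time series with no data on or before the stdev cutoff;
-- under the quick fix these would all get NA directions.
SELECT `source`, `signal`, `time_type`, `geo_type`, `geo_value`
FROM `covidcast`
GROUP BY `source`, `signal`, `time_type`, `geo_type`, `geo_value`
HAVING MIN(`time_value`) > 20200422;
```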
Meta cache revision is available in cmu-delphi/delphi-epidata#167, but includes the column renames so we'll need to get the disk space upped before we can apply it.
Working on adding a column to track which issue is the most recent.
Provisioning storage for further testing, including the column renames.
DB schema: the `signal` data type fix is done (varchar(32) -> varchar(64)); column renames are in progress.
Once that's done, we'll update the direction code to use the new names, then merge the meta cache update changes and test them.
Setup branches are ready to merge; the direction update branch still needs all its tests fixed, and conflicts are expected because we're doing two column adds from two different branches.
All staging PRs have been merged; next is to deploy to staging and test.
The `is_most_recent_issue` update ran overnight.
Remaining tests:
The current lay of the land:
[ ] csv upload: Testing today, will run on a large batch of files to make sure there are no major issues.
[x] direction update: Ran a test over the weekend on all partitions. It took ~970 minutes (16 hours). No memory errors, but this is still a long run time. I wonder if we can run this against multiple partitions at once. Seems to be about ~40 minutes/partition.
[x] meta cache update: This seems good! Completed in ~98 minutes, which seems to be a big improvement over the current run time of ~420 minutes.
Assuming no large issues with csv uploading, this will work as the stopgap fix until we do the major table reorganization.
To deploy to main:
`is_current_issue` fill took 12 hours -- iterate on this in staging to figure out if this will affect API performance.

The `fill_is_latest_issue` script is only meant to be run once, to initialize the `is_latest_issue` column. The integrity of the column is then maintained by the updated `insert_or_update_batch` query.
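A minimal sketch of how that maintenance can work, assuming the batch insert demotes the previously-latest row before writing a newer issue; the literal values are placeholders and the real `insert_or_update_batch` query may be structured differently:

```sql
-- Hypothetical demotion step: when a newer issue arrives for a time series,
-- clear the is_latest_issue flag on the row that previously held it. The
-- batch insert then writes the new row with is_latest_issue = 1.
UPDATE `covidcast`
SET `is_latest_issue` = 0
WHERE `source` = 'src-example' AND `signal` = 'sig-example'
  AND `time_type` = 'day' AND `time_value` = 20200601
  AND `geo_type` = 'county' AND `geo_value` = '42003'
  AND `is_latest_issue` = 1
  AND `issue` < 20200615;
```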
For the performance of direction update, I suggest adding the following key, and benchmarking at least one partition:
ALTER TABLE `covidcast` ADD KEY `by_is_latest_issue` (`source`, `signal`, `time_type`, `geo_type`, `geo_value`, `time_value`, `is_latest_issue`);
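To benchmark, an `EXPLAIN` on a representative read should show the optimizer picking `by_is_latest_issue`; the filter shape here is an assumption about how the direction updater reads latest-issue rows, and the literals are placeholders:

```sql
-- Hypothetical benchmark query for one time series; check the `key` column
-- of the EXPLAIN output for `by_is_latest_issue`.
EXPLAIN SELECT `time_value`, `value`
FROM `covidcast`
WHERE `source` = 'src-example' AND `signal` = 'sig-example'
  AND `time_type` = 'day' AND `geo_type` = 'county' AND `geo_value` = '42003'
  AND `is_latest_issue` = 1;
```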
Adding more detail to what a cutover might look like:
Upgrade MariaDB - We may want to do this step separately from the others; it will give us time to fix anything that may come up. So maybe we do the database upgrade on one day, then the other stuff the next.
Table changes
ALTER ONLINE TABLE `covidcast` MODIFY `signal` varchar(64) NOT NULL;
ALTER ONLINE TABLE `covidcast` RENAME COLUMN `value_updated_timestamp` TO `timestamp1`;
ALTER ONLINE TABLE `covidcast` RENAME COLUMN `direction_updated_timestamp` TO `timestamp2`;
ALTER ONLINE TABLE `covidcast` ADD COLUMN `is_latest_issue` BINARY(1) NOT NULL DEFAULT (0);
Merge all relevant changes to `main` and deploy.
Run the `is_latest_issue` fill operation [`python3 -m delphi.epidata.acquisition.covidcast.fill_is_latest_issue`]. The fill works against `epidata.covidcast` the whole time it is running, so we need to take that into consideration and probably run it overnight while Automation is paused. Once all the changes are made we should be able to start Automation.
Follow-on tasks would be to:
Once we know how long the `is_latest_issue` fill takes, we can extrapolate for production. We probably have nearly 2x the rows there.
Hmm... I ran the `is_latest_issue` fill again on staging and it took ~237 minutes. Assuming it did the same amount of work as the first time it ran, we might expect it to take ~400 minutes on prod (336M rows in staging vs. 574M rows in production; 237 × 574/336 ≈ 405).
Still need to check:
A 2-day migration event sounds wise.
Evidence suggests we'll have to upgrade the replicas as well, ideally the same night we upgrade prod. We'll do it with a script, so a buddy should not be necessary, but we'll keep you posted.
Estimated schedule:
Adding more storage to prod tonight before we do the table renames, code push, and issue fill (probably tomorrow).
The storage has been added, so after Automation has a chance to right itself we can look at the final steps for this.
@krivard I have the tables renamed/created for this task. Next up is the `is_latest_issue` fill operation.

The PR has reviewers, but I am not sure who might be available this week.
@krivard Ah! I see, didn't notice those were just suggestions for reviewers. So maybe this is ready to go. I'm not going to just merge it, but when you have a chance to double check it in some capacity we can merge it.
Is that the branch currently on staging, and is staging working as it's meant to?
Yeah, the branch has been deployed there and should have been what we tested with (87b1984).
Looking back, we manually tested CSV upload, direction update, and the cache update.
I never posted an update re: CSV upload in this issue thread, but I recall I ran it on a huge collection of files and it succeeded.
Direction update took forever (as expected).
We saw an improvement in the cache update. Our mileage may vary in the production env, which has more than 2x the data (and growing).
I think we've done what we can for testing. Our staging env is not a super robust approximation of production, but the code ran there. We don't run an identical Automation flow so outliers could exist.
I think we can merge, deploy, and move forward (carefully), but I can definitely run another batch of tests if we think that is valuable.
Okay, I've double-checked that the API on staging returns data, and the meta queries work as well.
I can merge now, or wait for the morning if we're worried it will fail overnight.
I think we can go ahead and merge. I'll watch to see if we have any major issues.
We did #24, but now direction and meta cache updates take forever and run out of memory. Let's fix that.