Closed: bsweger closed this issue 4 months ago.
Recording some numbers from the recent test of getting a forked version of the CDC's FluSight to the cloud.
Forked repo: https://github.com/bsweger/FluSight-forecast-hub/tree/main
S3 bucket: bsweger-flusight-forecast (these will disappear once we're done testing)
Number of model-output files:

```shell
find model-output -type f | wc -l
```

Number of files in the bucket's raw model-output folder:

```shell
aws s3 ls s3://bsweger-flusight-forecast/raw/model-output/ --recursive --no-sign-request | wc -l
```
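As an aside, the count comparison can be sketched as a self-contained script. The fixture directory below is a stand-in so the logic runs without AWS access; in the real check, the second count comes from the `aws s3 ls` command above.

```shell
#!/bin/sh
# Sketch of the local-vs-cloud file count comparison, using a throwaway
# fixture tree instead of S3. Real cloud count:
#   aws s3 ls s3://bsweger-flusight-forecast/raw/model-output/ --recursive --no-sign-request | wc -l
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/model-output/modelA" "$tmp/model-output/modelB"
touch "$tmp/model-output/modelA/2024-03-02-modelA.csv" \
      "$tmp/model-output/modelB/2024-03-02-modelB.csv" \
      "$tmp/model-output/README.md"
local_count=$(find "$tmp/model-output" -type f | wc -l | tr -d ' ')
echo "local file count: $local_count"
# A README.md is not converted to parquet, so (as in this test) the
# converted count is expected to be one lower than the raw count.
expected_converted=$((local_count - 1))
echo "expected converted count: $expected_converted"
rm -rf "$tmp"
```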
Before diving into a more detailed integrity check, I'll run down why we're missing a file in the converted model-output folder (and we need an issue to track getting alerts out to the team when the transform lambda fails).
The "missing" file is actually a README.md that wasn't converted to parquet. Although we should decide how we want to handle unsupported file types, the end result--at least from a file count perspective--is as expected.
This exercise resulted in 2 hubData issues we should resolve:
To run some integrity checks that compare a hub's GitHub-based model-output files and the transformed versions of those files, I worked around the above issues by:

- updating the hub's `admin.json` config to add `.parquet` as a valid file format
- `bsweger-flusight-forecast/model-output/FluSight-baseline_cat/2024-03-02-FluSight-baseline_cat.parquet`
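For reference, the `admin.json` change is a one-line addition to the hub's accepted file formats. This is a sketch of just the relevant fragment, assuming the hubverse `file_format` field; the rest of the config is unchanged.

```json
{
  "file_format": ["csv", "parquet"]
}
```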
Below is the R script that runs some integrity checks: test_cloud_hub_data.txt
Console output from running the above:

```
Rscript test_cloud_hub_data.R
Warning message:
! The following potentially invalid model output file not opened successfully.
/Users/rsweger/code/FluSight-forecast-hub/model-output/FluSight-baseline_cat/2024-03-02-FluSight-baseline_cat.csv
SubTreeFileSystem: s3://bsweger-flusight-forecast/
[1] "Comparing local and cloud row counts"
[1] TRUE
[1] "Comparing local and cloud row counts by model_id"
[1] TRUE
[1] "Comparing local and cloud schemas"
[1] TRUE
```
AWS handled the "bursty" lambda function invocations successfully, though there was some throttling due to what appears to be a concurrency limit of 10. The image below shows the default content of the "Monitoring" tab of the AWS Lambda console (lambda activity between 2024-05-15 01:02:00 and 2024-05-15 01:13:00 UTC, which is when the incoming test model-output files emitted the S3 events that trigger the lambda function).
Am not an expert in these charts, but adding some additional info after the image:
The concurrency threshold of 10 for our lambda function may be because our AWS account is new: https://benellis.cloud/my-lambda-concurrency-applied-quota-is-only-10-but-why
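One way to confirm the applied limit is via `aws lambda get-account-settings`, which reports the account's concurrent execution quota. The sample JSON below is a stand-in for that call's response (trimmed to the one relevant field) so the parsing can run offline.

```shell
#!/bin/sh
# Sketch: extract the applied Lambda concurrency limit from the
# get-account-settings response. Real call:
#   settings=$(aws lambda get-account-settings)
settings='{"AccountLimit": {"ConcurrentExecutions": 10}}'
limit=$(printf '%s' "$settings" | sed -n 's/.*"ConcurrentExecutions": \([0-9]*\).*/\1/p')
echo "applied concurrency limit: $limit"
```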
Gonna move this to done, now that we've onboarded the CDC's FluSight repo to the cloud. The archived FluSight data will have far more volume, but we can open new tickets if getting that onto the cloud surfaces additional issues.
Our GitHub Actions workflow to send data to S3 has been working well in our tests with small data volumes. However, it would be useful to get some rough timing estimates for large volumes of data (one reason: @lmullany is working to convert some archived hubs to hubverse format, and it would be great to make that data available on S3).
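A simple timing harness could get those estimates. The sketch below times a local copy of dummy files as a stand-in for the upload so it runs anywhere; for a real measurement, the copy step would be replaced with something like `aws s3 sync model-output "s3://$BUCKET/raw/model-output/"` (bucket name and file names here are hypothetical).

```shell
#!/bin/sh
# Timing harness sketch: build a fixture of dummy "model-output" files,
# then time moving them. Swap the cp for an aws s3 sync to time a real upload.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
for i in 1 2 3 4 5; do
  # 1 MiB dummy file standing in for a model-output parquet file
  dd if=/dev/zero of="$src/file$i.parquet" bs=1024 count=1024 2>/dev/null
done
start=$(date +%s)
cp -R "$src/." "$dst/"
end=$(date +%s)
n=$(find "$dst" -type f | wc -l | tr -d ' ')
elapsed=$((end - start))
echo "copied $n files in ${elapsed}s"
rm -rf "$src" "$dst"
```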