Based on the high urgency of this PR, the following reviewers are being gently reminded to review this PR:
@AetherUnbound @obulat This reminder is being automatically generated due to the urgency configuration.
Excluding weekend[^1] days, this PR was updated 2 day(s) ago. PRs labelled with high urgency are expected to be reviewed within 2 weekday(s)[^2].
@stacimc, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.
[^1]: Specifically, Saturday and Sunday.
[^2]: For the purpose of these reminders we treat Monday - Friday as weekdays. Please note that the workflow that generates these reminders runs at midnight UTC on Monday - Friday. This means that depending on your timezone, you may be pinged outside of the expected range.
Fixes
Fast follow for Flickr backfill. Related to WordPress/openverse#1285.
Description
This PR attempts to handle situations where the Flickr API returns excessively large batches of data. The logic should be documented pretty thoroughly in the code, but here's a refresher:
The Flickr API will only return 4,000 unique records for any given set of query params; after that, it will simply return duplicates indefinitely. Consequently, we have to query the API in such a way that each batch contains fewer than 4,000 records. Up until now, we have been doing this using the `TimeDelineatedProviderDataIngester` to break the day into small time intervals for ingestion.

There is a limit to the granularity of data by time interval, though -- once you reduce the interval size to about 5 minutes, the number of records stays the same. For example, given a 5-minute interval with >4k records, querying any 5-second interval within that range will still return >4k records. So reducing the timespan only works up to a certain point. However, we can still try to reduce the size of the result set by querying for one license type at a time, instead of all 8 license types at once, as sketched below.
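To make the time-splitting floor concrete, here's a minimal sketch; `split_interval`, `MAX_UNIQUE_RECORDS`, and `MIN_INTERVAL` are hypothetical names for illustration, not the actual `TimeDelineatedProviderDataIngester` API:

```python
from datetime import datetime, timedelta

# Illustrative constants; not the actual ingester configuration.
MAX_UNIQUE_RECORDS = 4_000           # Flickr's cap on unique results per query
MIN_INTERVAL = timedelta(minutes=5)  # below ~5 minutes, narrowing stops helping


def split_interval(
    start: datetime, end: datetime, record_count: int
) -> list[tuple[datetime, datetime]]:
    """Halve an interval whose batch is too large, respecting the floor.

    Once the window is ~5 minutes wide, the API reports the same >4k
    count however small it gets, so further splitting is pointless.
    """
    if record_count <= MAX_UNIQUE_RECORDS or end - start <= MIN_INTERVAL:
        return [(start, end)]
    midpoint = start + (end - start) / 2
    return [(start, midpoint), (midpoint, end)]
```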
This PR detects these large batches during ingestion, adds their intervals to a list for later processing, and skips ingestion for the batch. After 'regular' ingestion completes, each of these large intervals is reprocessed 8 times, once for each license type (sketched below). It's still possible for a 5-minute interval to contain more than 4k records for a single license type, but in that case there's nothing more we can do, so we process the first 4,000 results and then continue.
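A minimal sketch of that deferral-and-reprocess flow, assuming simplified helpers (`get_batch`, `process_batch`, and `LICENSE_IDS` are illustrative stand-ins, not the ingester's real methods):

```python
from datetime import datetime

# Illustrative stand-ins for the real ingester helpers; not the actual API.
MAX_UNIQUE_RECORDS = 4_000
LICENSE_IDS = range(1, 9)  # Flickr's 8 CC license types, by numeric id

large_intervals: list[tuple[datetime, datetime]] = []


def get_batch(start: datetime, end: datetime, license_id: int | None = None):
    """Stub: return (records, reported_total) for the window and license."""
    return [], 0


def process_batch(records: list) -> None:
    """Stub: save a batch of records."""


def ingest_interval(start: datetime, end: datetime) -> None:
    records, total = get_batch(start, end)  # all 8 licenses at once
    if total > MAX_UNIQUE_RECORDS:
        # Too large to ingest fully; defer for per-license reprocessing.
        large_intervals.append((start, end))
        return
    process_batch(records)


def reprocess_large_intervals() -> None:
    # After 'regular' ingestion, query each deferred interval once per license.
    for start, end in large_intervals:
        for license_id in LICENSE_IDS:
            records, _total = get_batch(start, end, license_id=license_id)
            # If a single license still exceeds the cap, there's nothing more
            # we can do: take the first 4,000 results and move on.
            process_batch(records[:MAX_UNIQUE_RECORDS])
```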
Notes
Testing Instructions
Try running the flickr DAG locally. In particular, run it for one of the days that failed in production using the DagRun conf options.
I tried a manual run with the conf:
This day failed in production after ingesting 7,253 records. When tested locally against this branch, the run succeeded after about 10 minutes and ingested 56,927 records.
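For reference, a manual run with a conf can be triggered from Python roughly like this (a hedged sketch: the DAG id `flickr_workflow` and the `date` conf key are assumptions, not verified values; check the DAG's documentation for the real ones):

```python
# Hedged sketch of triggering a manual DagRun with a conf.
# The DAG id and conf key below are assumptions, not verified values.
from airflow.api.client.local_client import Client

client = Client(None, None)  # local API client; requires a configured Airflow env
client.trigger_dag(
    dag_id="flickr_workflow",     # assumed DAG id for the Flickr provider
    conf={"date": "2023-01-01"},  # hypothetical conf key for the day to re-ingest
)
```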
Checklist
- My pull request has a descriptive title (not a vague title like `Update index.md`).
- My pull request targets the *default* branch of the repository (`main`) or a parent feature branch.
- My commit messages follow [best practices][best_practices].

[best_practices]: https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#_commit_guidelines
Developer Certificate of Origin
```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```