NASA-IMPACT / csdap-cumulus

SmallSat Cumulus Deployment

Increase ingest/publish rate to 1K granules/min #340

Closed chuckwondo closed 5 months ago

chuckwondo commented 5 months ago

In conjunction, decrease the discover/queue rate so that it roughly matches the ingest/publish rate. That way, no matter how large the collection is, messages are never in jeopardy of reaching the queue's 4-day retention period (AWS allows up to 14 days, but Cumulus does not allow the retention period to be configured for the background job queue). If we can keep the ingest/publish rate equal to the discover/queue rate, we can ingest a collection of any size without concern, because no message will ever sit in the queue for more than perhaps a few minutes.
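For operational visibility into whether the two rates are actually balanced, one option (not part of this change, just a sketch) is to watch the age of the oldest message on the background job queue via the standard SQS CloudWatch metric; the queue name below is hypothetical:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical queue name; substitute the deployment's background job queue.
QUEUE_NAME = "my-prefix-backgroundJobQueue"
RETENTION = timedelta(days=4)

cloudwatch = boto3.client("cloudwatch")


def oldest_message_age() -> timedelta:
    """Return the max age of the oldest queued message over the last 15 minutes."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/SQS",
        MetricName="ApproximateAgeOfOldestMessage",
        Dimensions=[{"Name": "QueueName", "Value": QUEUE_NAME}],
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Maximum"],
    )
    max_age_seconds = max((dp["Maximum"] for dp in stats["Datapoints"]), default=0.0)
    return timedelta(seconds=max_age_seconds)


if __name__ == "__main__":
    age = oldest_message_age()
    print(f"Oldest message age: {age} ({age / RETENTION:.1%} of retention)")
```

If the rates are balanced as intended, this age should stay in the range of minutes rather than trending toward the retention limit.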

However, since making the two rates exactly identical is impossible, it is better to err on the side of a slightly greater ingest/publish rate, because then messages never remain on the queue for more than a few moments. If we were to err slightly in the other direction, with discover/queue slightly faster, the queue would grow very slowly. Given a large enough collection, even that slow growth would eventually cause messages to exceed the retention period, but that would likely require a collection of several million granules, perhaps at least 10M.
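As a rough sanity check on that figure (a sketch with assumed rates, where the 2x discover rate is purely illustrative): the oldest queued message waits roughly backlog / ingest rate, so messages start expiring only once the backlog reaches about retention x ingest rate, and the collection needed to build such a backlog is larger still.

```python
RETENTION_MINUTES = 4 * 24 * 60  # 4-day queue retention, in minutes
INGEST_RATE = 1_000              # granules/min (the target of this change)
DISCOVER_RATE = 2_000            # hypothetical: discover/queue twice as fast

# The backlog grows at the difference between the two rates.
growth_rate = DISCOVER_RATE - INGEST_RATE  # granules/min

# The oldest message waits roughly backlog / ingest rate, so messages start
# expiring once the backlog reaches about retention * ingest rate.
critical_backlog = RETENTION_MINUTES * INGEST_RATE
print(f"Backlog at which retention is exceeded: {critical_backlog:,} granules")

# Minutes of continuous discovery needed to build that backlog, and the number
# of granules discovered in that time (i.e., the collection size required).
minutes_to_critical = critical_backlog / growth_rate
collection_size = DISCOVER_RATE * minutes_to_critical
print(f"Collection size needed at this mismatch: {collection_size:,.0f} granules")
```

With these assumed rates the backlog threshold is about 5.76M granules and the required collection works out to roughly 11.5M; a smaller rate mismatch pushes the required collection size even higher, which is consistent with the "at least 10M" estimate above.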

Also, make error handling a bit more robust to ensure we do our utmost to retry, and, if all else fails, that we record the error for Athena queries. There have been recent discrepancies between the number of errors we see in Athena and the number of granules with status "failed", where Athena appears to be missing failures. This may be because the RecordFailure step is not reliably capturing and writing failures to S3.
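To illustrate the kind of "record the failure no matter what" behavior intended here (a minimal sketch, not the actual RecordFailure implementation; the bucket, key layout, and record shape are all hypothetical), a step could retry its own write to S3 before giving up:

```python
import json
import time
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Hypothetical destination; the real step writes wherever Athena expects to
# find failure records.
FAILURES_BUCKET = "my-internal-bucket"
FAILURES_PREFIX = "failures"


def record_failure(granule_id: str, error: str, attempts: int = 3) -> None:
    """Write a failure record to S3, retrying so Athena does not miss it."""
    record = {
        "granule_id": granule_id,
        "error": error,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    key = f"{FAILURES_PREFIX}/{granule_id}.json"

    for attempt in range(1, attempts + 1):
        try:
            s3.put_object(
                Bucket=FAILURES_BUCKET,
                Key=key,
                Body=json.dumps(record).encode(),
                ContentType="application/json",
            )
            return
        except Exception:
            if attempt == attempts:
                raise  # surface the error so the workflow still marks the granule failed
            time.sleep(2 ** attempt)  # simple exponential backoff
```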

Fixes #337