The symbolication implementation as it exists today is the bottleneck for ingestion throughput. Improvements in the following areas can relieve that bottleneck. These optimizations are platform-agnostic, making ingestion faster for present and future platforms.
Don't work with large mapping files
Only symbolicate once
Implement an ingestion queue
Don't work with large mapping files
Mapping files are large. Use a full mapping file only once, at upload time, and don't store it at all. Instead, extract the useful parts into a form that compresses as much information as possible while remaining functional for symbolication. Let's call this a reduced mapping file; we store only reduced mapping files.
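As an illustration, here's a minimal sketch of the reduction, assuming ProGuard/R8-style mapping files; the `reduceMapping` helper and its line-classification rules are hypothetical and simplified:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// reduceMapping keeps only the parts of a ProGuard/R8-style mapping file
// needed to symbolicate stack traces: class renames and method line-number
// mappings. Comments, blank lines, and field mappings are dropped.
// This is a sketch, not a complete parser.
func reduceMapping(full string) string {
	var out strings.Builder
	sc := bufio.NewScanner(strings.NewReader(full))
	for sc.Scan() {
		line := sc.Text()
		switch {
		case line == "" || strings.HasPrefix(line, "#"):
			// Drop blank lines, comments, and compiler metadata.
		case !strings.HasPrefix(line, " "):
			// Class mapping, e.g. "com.app.Foo -> a.b:". Keep it.
			out.WriteString(line + "\n")
		case strings.Contains(line, "("):
			// Method mapping, e.g. "    1:5:void bar() -> a".
			// Stack frames reference methods, so keep these.
			out.WriteString(line + "\n")
		}
		// Field mappings fall through and are discarded.
	}
	return out.String()
}

func main() {
	full := "# compiler: R8\ncom.app.Foo -> a.b:\n    int count -> c\n    1:5:void bar() -> a\n"
	fmt.Print(reduceMapping(full))
}
```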
Only symbolicate once
Right now, we symbolicate every single crash and ANR. Instead, we should symbolicate only when we have to. This could drastically reduce the load on the symbolicator services and improve ingestion throughput. Here's the high-level algorithm (a code sketch follows the steps).
Compute a hash for each new crash/ANR
Check if a symbolicated result already exists against the hash
If found, replace the crash/ANR with symbolicated data
If not found, symbolicate and store the hash and computed symbolicated data
Repeat.
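A minimal sketch of that lookup in Go, assuming SHA-256 over the raw stack trace and a generic key-value store; the `Cache` interface and the `symbolicate` callback are hypothetical stand-ins:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Cache is a hypothetical key-value store for symbolicated results.
type Cache interface {
	Get(key string) (string, bool)
	Put(key, value string)
}

type memCache map[string]string

func (m memCache) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }
func (m memCache) Put(k, v string)             { m[k] = v }

// symbolicateOnce returns the symbolicated form of a crash/ANR,
// invoking the expensive symbolicator only on a cache miss.
func symbolicateOnce(cache Cache, rawTrace string, symbolicate func(string) string) string {
	// 1. Compute a hash for the new crash/ANR.
	sum := sha256.Sum256([]byte(rawTrace))
	key := hex.EncodeToString(sum[:])

	// 2. Check if a symbolicated result already exists for this hash.
	if result, ok := cache.Get(key); ok {
		// 3. Found: reuse the stored symbolicated data.
		return result
	}

	// 4. Not found: symbolicate, then store hash -> result.
	result := symbolicate(rawTrace)
	cache.Put(key, result)
	return result
}

func main() {
	cache := memCache{}
	sym := func(raw string) string { return "symbolicated " + raw } // stand-in
	fmt.Println(symbolicateOnce(cache, "a.b.c(SourceFile:1)", sym)) // miss: symbolicates
	fmt.Println(symbolicateOnce(cache, "a.b.c(SourceFile:1)", sym)) // hit: cached result
}
```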
What about versions and architecture?
We don't have to associate results with versions and architectures; this keeps the logic simple and reusable. But if the need arises, we can modify the approach accordingly.
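If that need arises, the change could be as small as folding those attributes into the hash key. A hypothetical extension of the sketch above:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// keyFor folds the app version and ABI into the cache key, so identical
// traces from different builds or architectures get separate entries.
// Hypothetical extension of the symbolicate-once sketch above.
func keyFor(rawTrace, appVersion, abi string) string {
	sum := sha256.Sum256([]byte(appVersion + "|" + abi + "|" + rawTrace))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(keyFor("a.b.c(SourceFile:1)", "2.4.1", "arm64-v8a"))
}
```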
Implement an ingestion queue
At present, the whole ingestion pipeline is synchronous, which limits our ability to parallelize ingestion. Instead, let's introduce three separate buckets: the first stores unprocessed events, the second stores symbolicated and grouped crashes/ANRs, and the third stores events that are ready to be flushed to ClickHouse. Here's the high-level algorithm (a worker sketch follows the flowchart).
Validate the incoming event batch
If validated, put it in the raw queue
Reply with a 202 Accepted to the client
An event worker takes events from the raw queue, symbolicates and groups them, and puts them in the processed queue
An ingestion worker takes these processed events and stores them in ClickHouse in bulk
```mermaid
flowchart LR
A[event arrival] -- put in --> B[raw queue] -- symbolicate and group --> C[processed queue] -- store in --> D[ClickHouse]
```
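A minimal sketch of the staged pipeline; the handler and worker functions are hypothetical, and in-process channels stand in for the queues, which (per the notes below) should be backed by durable storage:

```go
package ingestion

import "net/http"

// Event is a raw incoming event; ProcessedEvent is symbolicated and grouped.
// Channels stand in for the durable raw and processed queues.
type Event struct{ Payload []byte }
type ProcessedEvent struct{ Payload []byte }

// handleIngest validates the incoming batch, enqueues it on the raw
// queue, and replies 202 Accepted without waiting for processing.
func handleIngest(w http.ResponseWriter, r *http.Request, raw chan<- Event) {
	// ... validate the incoming event batch (omitted) ...
	raw <- Event{ /* parsed payload */ }
	w.WriteHeader(http.StatusAccepted) // 202: accepted for async processing
}

// eventWorker drains the raw queue, symbolicates and groups events,
// and forwards them to the processed queue. It touches no database.
func eventWorker(raw <-chan Event, processed chan<- ProcessedEvent, transform func(Event) ProcessedEvent) {
	for ev := range raw {
		processed <- transform(ev)
	}
}

// ingestionWorker drains the processed queue and writes to ClickHouse.
// It performs no transformations; batching by size or timeout is
// sketched under the notes below.
func ingestionWorker(processed <-chan ProcessedEvent, store func([]ProcessedEvent)) {
	for ev := range processed {
		store([]ProcessedEvent{ev})
	}
}
```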
Additional notes
These queues should be fail-safe: if the system recovers from a crash, it should restore all data in the queues and let the workers transparently resume execution.
Queue progress should be tracked using OpenTelemetry traces and metrics. Furthermore, track the complete time from event arrival to event storage (a metrics sketch follows these notes).
Event workers should not interface with any databases; their responsibility is limited to event transformations.
Ingestion workers should not carry out transformations; their responsibility is limited to storing data.
Ingestion workers should flush when the queue size exceeds a configurable limit (say, 1 million events) or a configurable timeout elapses, whichever happens first (a flush sketch follows these notes).
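For the latency note above, a sketch using the OpenTelemetry Go metrics API; the meter and instrument names are placeholders:

```go
package telemetry

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
)

var (
	meter = otel.Meter("ingestion")

	// Histogram of end-to-end latency; instrument name is a placeholder.
	ingestionLatency, _ = meter.Float64Histogram("event.ingestion.duration")
)

// recordLatency tracks the complete time from event arrival to the
// moment the event is stored in ClickHouse.
func recordLatency(ctx context.Context, arrivedAt time.Time) {
	ingestionLatency.Record(ctx, time.Since(arrivedAt).Seconds())
}
```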
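And for the size-or-timeout rule, a common flush pattern, reusing the hypothetical ProcessedEvent type from the pipeline sketch; the limits shown are illustrative:

```go
package ingestion

import "time"

// flushLoop persists the accumulated batch when it reaches maxBatch
// events or when flushEvery elapses, whichever happens first.
// Both limits are illustrative and should be configurable.
func flushLoop(processed <-chan ProcessedEvent, store func([]ProcessedEvent)) {
	const maxBatch = 1_000_000
	const flushEvery = 30 * time.Second

	batch := make([]ProcessedEvent, 0, 1024)
	ticker := time.NewTicker(flushEvery)
	defer ticker.Stop()

	flush := func() {
		if len(batch) == 0 {
			return
		}
		store(batch)
		batch = make([]ProcessedEvent, 0, 1024) // fresh slice; store keeps the old one
	}

	for {
		select {
		case ev, ok := <-processed:
			if !ok { // queue closed: flush what's left and stop
				flush()
				return
			}
			batch = append(batch, ev)
			if len(batch) >= maxBatch {
				flush() // size limit reached first
			}
		case <-ticker.C:
			flush() // timeout reached before the size limit
		}
	}
}
```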
Summary
Symbolication is the current ingestion bottleneck. Three platform-agnostic changes address it: store reduced mapping files instead of full ones, symbolicate each unique crash/ANR only once by caching results against a hash, and decouple ingestion into queue-backed asynchronous stages. Record the present ingestion rate as a baseline for future comparison.