Made the code more DRY, which makes it clearer and less error prone. Previously we had a data race on tracers and sinks because no lock was taken; the missing lock is the one that protects tracer/sink registration, not the one guarding the actual event dequeue.
This also paves the way for refactoring the trace aggregator to write into a larger in-memory cache before batching the I/O writes.
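A minimal sketch of the locking discipline described above, with hypothetical names (`TraceAggregator`, `register_sink`, `dispatch` are illustrative, not the actual API in this change): registration is guarded by a dedicated lock, and dispatch takes a snapshot under that lock so writes happen without holding it.

```python
import threading

class TraceAggregator:
    """Hypothetical sketch, not the real implementation."""

    def __init__(self):
        # This lock protects tracer/sink registration only; the
        # event dequeue would use its own synchronization.
        self._registry_lock = threading.Lock()
        self._tracers = []
        self._sinks = []

    def register_tracer(self, tracer):
        with self._registry_lock:
            self._tracers.append(tracer)

    def register_sink(self, sink):
        with self._registry_lock:
            self._sinks.append(sink)

    def dispatch(self, event):
        # Snapshot the sink list under the lock, then write outside it,
        # so a concurrent register_sink() cannot race the iteration.
        with self._registry_lock:
            sinks = list(self._sinks)
        for sink in sinks:
            sink(event)

agg = TraceAggregator()
received = []
agg.register_sink(received.append)
agg.dispatch("hello")
# received == ["hello"]
```

The snapshot-then-dispatch pattern also leaves room for the future refactor mentioned above: `dispatch` could append to an in-memory cache and flush batched I/O later without changing the registration locking.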
Related: #38