ber4444 opened this issue 7 months ago
Thanks @ber4444 yes this is a good idea. It will take some care to get built correctly, but I hope that leveraging the upstream disk-buffering might also be able to help here.
We can use `enableDiskBuffering`, so ideally the way it would work:
- launch the app with an old, expired token, initialize the SDK, and buffer events to disk; upon receiving an auth error, keep the buffer alive
- relaunching the app with a new token sends all events from the disk buffer
Is that how it already works?
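The two-launch flow described above can be modeled independently of the SDK as a token-gated buffer. This is an illustrative sketch only, not the Splunk API: the class, its methods, and the hard-coded "new-token" check are all invented for demonstration. Events are buffered while the token is rejected and drained once a valid token is supplied on a later launch.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative sketch only -- not the Splunk SDK API.
class BufferingExporter {
    private final Queue<String> diskBuffer = new ArrayDeque<>(); // stands in for the on-disk buffer
    private String token;

    BufferingExporter(String token) { this.token = token; }

    // Pretend the backend accepts only "new-token" and rejects everything else.
    private boolean send(String event) { return "new-token".equals(token); }

    List<String> export(String event) {
        diskBuffer.add(event);                // always buffer to disk first
        List<String> delivered = new ArrayList<>();
        while (!diskBuffer.isEmpty() && send(diskBuffer.peek())) {
            delivered.add(diskBuffer.poll()); // drain the backlog on success
        }
        return delivered;                     // on auth failure the buffer stays alive
    }

    void rotateToken(String newToken) { this.token = newToken; }
}
```

Under this model, a launch with the expired token accumulates events, and the first export after the token rotates delivers the whole backlog in order.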
I think so, yeah, that's the general idea. There are default limits to be aware of, however: both the amount of data stored on device storage and the number of export attempts per file (20 is the default; see `FileSender`).
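The per-file retry cap can be sketched as follows. This is a simplified model, not `FileSender` itself: the class and method names are invented, and the only behavior mirrored is that each buffered file is attempted at most a fixed number of times before being dropped.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of a per-file export retry cap (not the real FileSender).
class RetryLimitedSender {
    private final int maxRetries;
    private final Map<String, Integer> attempts = new HashMap<>();

    RetryLimitedSender(int maxRetries) { this.maxRetries = maxRetries; } // 20 is the quoted default

    /** Returns true only on a successful send; false on failure or once the file is dropped. */
    boolean trySend(String file, boolean backendUp) {
        int used = attempts.merge(file, 1, Integer::sum);
        if (used > maxRetries) return false; // attempts exhausted -- the file has been dropped
        return backendUp;                    // otherwise success depends on the backend
    }

    boolean isDropped(String file) {
        return attempts.getOrDefault(file, 0) >= maxRetries;
    }
}
```

The point for token rotation: if the backend rejects the old token for long enough, a file can burn through all of its attempts and be discarded before the new token ever arrives.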
Some of the disk buffering is also likely to change as we move away from Zipkin and toward the upstream OpenTelemetry-based OTLP disk-buffering solution.
Thanks. Also, `FileSender` is package-private, so its defaults cannot be overridden.
Oh yeah, you're right. It looks like we haven't surfaced any of that configurability yet.
We rotate our Splunk token on a regular schedule per infosec recommendation, so the app pulls the token from Firebase Remote Config. The issue is that right now we need to delay Splunk SDK initialization until the remote values are read (from the server or from the disk cache, both of which are async operations). Please add a way to initialize the Splunk SDK without a token, let it accumulate events, and then, once we provide a token, have it send the cached events in a batch to your Splunk instance.
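Until the SDK supports token-less initialization, one app-side workaround is to queue events yourself and replay them once the token arrives. This is a sketch under stated assumptions: `DeferredRum` and `recordEvent` are invented names, and the `Consumer<String>` delegate stands in for whatever the real SDK's event-reporting call is after `initialize` runs.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// App-side workaround sketch (not part of the Splunk SDK): hold events in a
// pending queue until the token arrives from remote config, then hand them
// to the real reporter in order.
class DeferredRum {
    private final Queue<String> pending = new ArrayDeque<>();
    private Consumer<String> delegate; // null until the SDK is initialized

    synchronized void recordEvent(String event) {
        if (delegate != null) delegate.accept(event);
        else pending.add(event); // SDK not ready yet -- hold the event
    }

    /** Call once the token is available; replays everything recorded so far. */
    synchronized void initialize(String token, Consumer<String> sdkReporter) {
        // In a real app, `token` would go to the SDK builder; ignored in this sketch.
        this.delegate = sdkReporter;
        while (!pending.isEmpty()) delegate.accept(pending.poll());
    }
}
```

This loses the crash-safety of a true disk buffer (the pending queue is in memory only), which is why surfacing this in the SDK itself, backed by its disk buffering, would be preferable.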