braintree / braintree_ios

Braintree SDK for iOS
https://developer.paypal.com/braintree/docs/start/hello-client/ios/v5
MIT License

[QL] Measure API Request Latency for PayPal Flows #1292

Closed: jaxdesmarais closed this pull request 2 months ago

jaxdesmarais commented 2 months ago

Summary of changes

Checklist

Authors

List GitHub usernames for everyone who contributed to this pull request.

scannillo commented 2 months ago

Based on this diagram from Apple’s docs, maybe requestStart isn’t what we want, but rather fetchStart?

The DNS lookup, TCP & TLS setup aren’t included in requestStart.
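For context, both timestamps come from URLSessionTaskTransactionMetrics, which is delivered to a URLSessionTaskDelegate once a request completes. A minimal sketch of measuring latency from fetchStartDate (which covers DNS lookup and TCP/TLS setup) rather than requestStartDate; the collector class and logging here are illustrative, not the SDK's actual implementation:

```swift
import Foundation

// Illustrative only; not the actual SDK code.
final class LatencyMetricsCollector: NSObject, URLSessionTaskDelegate {

    func urlSession(
        _ session: URLSession,
        task: URLSessionTask,
        didFinishCollecting metrics: URLSessionTaskMetrics
    ) {
        for transaction in metrics.transactionMetrics {
            // fetchStartDate marks the start of the fetch, before DNS lookup and
            // TCP/TLS setup. requestStartDate is when the first byte of the request
            // is sent, i.e. after the connection is already established.
            guard let fetchStart = transaction.fetchStartDate,
                  let responseEnd = transaction.responseEndDate else { continue }

            let latencyMilliseconds = responseEnd.timeIntervalSince(fetchStart) * 1000
            print("API request latency: \(latencyMilliseconds) ms")
        }
    }
}
```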

jaxdesmarais commented 2 months ago

> Based on this diagram from Apple’s docs, maybe requestStart isn’t what we want, but rather fetchStart?
>
> The DNS lookup, TCP & TLS setup aren’t included in requestStart.

Nice find, updated here: https://github.com/braintree/braintree_ios/pull/1292/commits/7ffd8ab6c1a473264a6e4ce9d090dc859b260c99

scannillo commented 2 months ago

One thing came to mind: once the batching logic is in place, maybe we should just send this analytic event for all API requests (not just PayPal)? Since our batching is time based rather than based on the number of events, I don't think it will add much networking overhead.

Do we think those metrics would also be useful to have?
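To illustrate why the event count matters little here, this is a rough, hypothetical sketch of time-based batching: events accumulate in a queue and are flushed once per fixed interval, so emitting the latency event for every API request adds entries to a payload rather than extra network calls. The type name and interval are made up for the example and are not the SDK's actual batching design.

```swift
import Foundation

// Hypothetical time-based batcher: flushes on a fixed schedule regardless of
// how many events have accumulated.
final class AnalyticsBatcher {

    private var pendingEvents: [String] = []
    private let queue = DispatchQueue(label: "analytics.batcher")
    private let flushInterval: TimeInterval = 15

    init() {
        scheduleFlush()
    }

    func enqueue(_ eventName: String) {
        queue.async { self.pendingEvents.append(eventName) }
    }

    private func scheduleFlush() {
        queue.asyncAfter(deadline: .now() + flushInterval) { [weak self] in
            guard let self = self else { return }
            let batch = self.pendingEvents
            self.pendingEvents = []
            if !batch.isEmpty {
                // send(batch): one network call per interval, however many
                // events (PayPal-only or all API requests) are in the batch.
            }
            self.scheduleFlush()
        }
    }
}
```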

scannillo commented 2 months ago

@jaxdesmarais following up here - we talked IRL about merging this in anyways, even if batching isn't complete. Do we want to run a few comparison tests using OS Signpost to make sure this change isn't adding too much additional time?

Getting actual data for these analytics from real merchant usage will help us a ton
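For anyone reproducing these comparison tests, a sketch of an OS Signpost harness along the lines described; the subsystem, category, and helper name are assumptions for the example, not part of the SDK:

```swift
import Foundation
import os.signpost

// Signpost intervals show up in Instruments, so the same request can be timed
// on main and on this branch and the durations compared.
let signpostLog = OSLog(subsystem: "com.braintree.latency-tests", category: "APIRequest")

func measureRequest(_ request: URLRequest, session: URLSession = .shared) {
    let signpostID = OSSignpostID(log: signpostLog)
    os_signpost(.begin, log: signpostLog, name: "APIRequest", signpostID: signpostID)

    session.dataTask(with: request) { _, _, _ in
        os_signpost(.end, log: signpostLog, name: "APIRequest", signpostID: signpostID)
    }.resume()
}
```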

jaxdesmarais commented 2 months ago

> @jaxdesmarais following up here - we talked IRL about merging this in anyways, even if batching isn't complete. Do we want to run a few comparison tests using OS Signpost to make sure this change isn't adding too much additional time?
>
> Getting actual data for these analytics from real merchant usage will help us a ton

For sure, I am using the api-request-latency-signpost branch for testing. Results are on cellular:

Results on main:

So it looks like on average this would add 85.59ms of latency without batching. Curious if your results are the same.

scannillo commented 2 months ago

Here are my results. I ran them on WiFi for more consistency between trials:

[Screenshot: Signpost latency results, 2024-05-14 at 1:24:27 PM]

jaxdesmarais commented 2 months ago

Are we okay with that variance between stable and slightly less stable connections if we merge this in and release before batching? I agree the sooner we can get more data on these hops, the better.

jaxdesmarais commented 2 months ago

For reference, here are my local test results from merging this branch into batch-analytics:

jaxdesmarais commented 2 months ago

Discussed with @scannillo IRL and removing the "do not merge" tag, since even in the worst case our latency is minimally impacted. This will allow us to get additional merchant data on these hops, and we will fast follow with batching.