We want to build an interactions module for starfish that shows poorly performing interactions in your app. To facilitate this, we need a couple of new interaction metrics, and the ability to meaningfully tie interactions to a specific screen to give the user context for where the interaction occurs in the app.
Interaction Metrics
We want to split the problem into two metrics: one that indicates immediate responsiveness of your app ("INP"-type metric) and another that tells you when meaningful content has been painted after an interaction.
Interaction to Next Paint-type metric
Captures the time to the next paint after an interaction. This differs from INP in that we want to capture it on every interaction (as opposed to tracking it as a page's responsiveness metric).
The measurement ideally starts at the touch up event, includes the work executed by the callback handler, and ends either when the main thread is idle or (possibly more accurately) when the next frame renders (if we have access to the next frame render time).
Ideally, this measurement would also be auto-instrumented and work OOTB.
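The timing logic above can be modeled platform-agnostically, as in the sketch below. All names here (`InteractionTimer`, `onTouchUp`, `onNextFramePainted`) are hypothetical; on Android the end signal could plausibly come from a `Choreographer` frame callback, and on iOS from `CADisplayLink`, but this is a sketch of the measurement, not an existing SDK API.

```java
// Hypothetical model of the INP-type measurement: the timer starts at
// the touch up event and stops when the next frame after the
// interaction is painted. Timestamps are nanoseconds (e.g. from
// System.nanoTime() or a frame callback's frame time).
class InteractionTimer {
    private long touchUpNanos = -1;

    // Called when the touch up event is delivered to the handler.
    void onTouchUp(long nowNanos) {
        touchUpNanos = nowNanos;
    }

    // Called when the next frame after the interaction is painted.
    // Returns the interaction-to-next-paint duration in milliseconds,
    // or -1 if no touch up was recorded.
    long onNextFramePainted(long nowNanos) {
        if (touchUpNanos < 0) return -1;
        long millis = (nowNanos - touchUpNanos) / 1_000_000;
        touchUpNanos = -1; // reset so each interaction is measured once
        return millis;
    }
}
```

If only a "main thread idle" signal is available rather than a frame render time, the same shape works with the idle callback ending the measurement instead.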
Time to meaningful content painted
The time it takes for the effects of the interaction (HTTP and DB requests, etc.) to execute and the UI to update, i.e. the time taken to render the screen that appears once the loading state ends and the new data is populated.
It seems like some level of manual instrumentation will be required here - the user needs to tell us what they consider a meaningful paint. We brainstormed a couple of options here, but we're open to other suggestions too!
Add helper functions/wrappers for common components (like list, table view) that tell you when a render is complete. We can do this for common UI patterns that typically wait on async task completion before rendering. If we can get about 70% coverage with this approach, we can provide an API for the user to mark the UI render as complete for the rest.
Another possibility is to provide the ability to start a measurement span inside the click handler of a button; the user then calls a method to end the span when the render is complete. The measurement extracted from this span is associated with the UI interaction.
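The second option could take roughly the shape sketched below. Every name here (`InteractionSpan`, `start`, `finish`) is illustrative, not an existing SDK API; the point is only that the span opens inside the click handler and the user closes it once the meaningful content has rendered.

```java
// Hypothetical manual span for "time to meaningful content painted".
// Timestamps are nanoseconds; the duration is reported in milliseconds.
class InteractionSpan {
    private final String interaction;
    private final long startNanos;
    private long durationMillis = -1;

    private InteractionSpan(String interaction, long startNanos) {
        this.interaction = interaction;
        this.startNanos = startNanos;
    }

    // Called inside e.g. a button's click handler.
    static InteractionSpan start(String interaction, long nowNanos) {
        return new InteractionSpan(interaction, nowNanos);
    }

    // Called by the user when the meaningful render is complete; the
    // resulting measurement is associated with the interaction.
    void finish(long nowNanos) {
        durationMillis = (nowNanos - startNanos) / 1_000_000;
    }

    long durationMillis() {
        return durationMillis;
    }

    String interaction() {
        return interaction;
    }
}
```

The helper-wrapper option from the previous paragraph could then be built on top of this: the wrapper calls `finish` automatically when the component reports that its render is done.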
Interaction Context
Associate interaction to screen
For an interaction module to be useful, it is necessary to provide the context in which the interaction happens - the screen. We want to be able to filter all the interactions that happen on a screen. Do all mobile SDKs follow a pattern where the screen can simply be extracted from the transaction name?
What is the "screen" in this case - is it just the parent View Controller or Activity that contains the component that was interacted with? Is it the root view controller/activity currently on screen (I see that we have this information in breadcrumbs)? What makes the most sense? These are just open-ended questions to figure out what is a reasonable way to group interactions!
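If the "extract from the transaction name" route is viable, the grouping could look like the sketch below. This assumes transaction names that lead with the screen's class name (e.g. an Activity or View Controller); the method name, the fallback string, and the name format are all assumptions for illustration.

```java
// Hypothetical helper deriving a grouping "screen" for an interaction,
// under the assumption that transaction names look like
// "MainActivity.onCreate" or plain "LoginViewController".
class ScreenGrouping {
    static String screenFor(String transactionName) {
        if (transactionName == null || transactionName.isEmpty()) {
            return "unknown"; // illustrative fallback bucket
        }
        // "MainActivity.onCreate" groups under "MainActivity".
        int dot = transactionName.indexOf('.');
        return dot > 0 ? transactionName.substring(0, dot) : transactionName;
    }
}
```

If SDKs do not name transactions consistently enough for this, the alternative from the questions above - taking the root view controller/activity currently on screen, as recorded in breadcrumbs - would need its own lookup instead.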
Slack-Channel
discuss-starfish
Notion Document(s)
Interactions in Mobile
Stakeholder(s)
https://github.com/orgs/getsentry/teams/team-starfish @alexjillard
Team(s)
Mobile