Background
Partner mobile teams break their Promoted metrics integration without realizing it. In addition, these mobile teams vary in the engineering infrastructure they have in place. This means that:
Automated testing may not be available.
Not all features, especially “invisible” features such as metrics collection, are tested prior to release.
Philosophy
Catch these problems as soon as possible; ideally during development, before any code is merged.
Normally we’d do this using automated tests, but our partners seldom have the testing infrastructure for this. We could try writing tests for them, but partners may not have the discipline to run those tests, or to block on failing tests, during development.
Make it apparent what the error is, and make it hard not to notice that you’ve broken something.
Proposal
Build failsafes into the Mobile Metrics SDK that trigger when the app is running in dev mode. Make these failsafes check at runtime.
Breaking scenarios to check for:
Log user ID is missing. (This implies initialization failure.)
Impression: No insertion or content ID in Delivery.
Action: No insertion, impression, or content ID in Delivery.
Make the logger class return errors/warnings as the result of the logging calls (see the sketch after this list).
Validate the results of these calls further down the line when in dev mode. Ignore these results in production.
If any errors or warnings occur, interrupt execution of the app with a detailed explanation of what has happened.
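A minimal sketch of what this could look like in Swift. The names here (MetricsLogger, Content, LoggingError) are assumptions for illustration, not the actual Promoted SDK API: the logging call returns errors as part of its result, and a dev-mode-only check interrupts execution with a detailed explanation.

```swift
// Sketch only; MetricsLogger, Content, and LoggingError are hypothetical names.
import Foundation

struct Content {
  var contentID: String?
  var insertionID: String?
}

enum LoggingError: Error, CustomStringConvertible {
  case missingLogUserID    // implies the logging service never initialized
  case missingIDs(String)  // no insertion, impression, or content ID in Delivery

  var description: String {
    switch self {
    case .missingLogUserID:
      return "Log user ID is missing. Was the metrics logging service initialized?"
    case .missingIDs(let detail):
      return detail
    }
  }
}

final class MetricsLogger {
  private let devMode: Bool
  private var logUserID: String?  // set during initialization

  init(devMode: Bool) {
    self.devMode = devMode
  }

  /// Logging calls return errors/warnings instead of failing silently.
  @discardableResult
  func logImpression(content: Content) -> [LoggingError] {
    var errors: [LoggingError] = []
    if logUserID == nil {
      errors.append(.missingLogUserID)
    }
    if content.insertionID == nil && content.contentID == nil {
      errors.append(.missingIDs("Impression has no insertion ID or content ID in Delivery."))
    }
    // ... build and enqueue the impression event as usual ...
    validate(errors)
    return errors
  }

  /// Further down the line, validate results only in dev mode;
  /// production ignores them entirely.
  private func validate(_ errors: [LoggingError]) {
    guard devMode, !errors.isEmpty else { return }
    let explanation = errors.map(\.description).joined(separator: "\n")
    // Interrupt execution with a detailed explanation of what happened.
    assertionFailure("Promoted metrics logging failed:\n\(explanation)")
  }
}
```

Because the returned errors are only acted on in dev mode, none of this changes user-facing behavior in production builds.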
Logging anomaly handling
Behavior on logging error/warning (exposed in ClientConfig as an enum property; see the sketch after this list):
Nothing
Show UI
Break in debugger
Log entire content object (optional)
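One possible shape for this configuration in Swift. The enum and property names below are assumptions, not the actual ClientConfig surface, and "log entire content object" is modeled here as a separate optional flag rather than a fourth enum case:

```swift
// Sketch only; names are hypothetical.
import Foundation

/// Behavior when the SDK detects a logging error or warning in dev mode.
enum LoggingAnomalyHandling {
  case nothing          // do nothing (same as production)
  case showUI           // present an alert describing what broke
  case breakInDebugger  // trap so the developer lands in the debugger
}

struct ClientConfig {
  var devMode: Bool = false
  var loggingAnomalyHandling: LoggingAnomalyHandling = .showUI
  /// Optionally include the entire content object in the error report.
  var logsEntireContentObjectOnAnomaly: Bool = false
}

func handleLoggingAnomaly(message: String, config: ClientConfig) {
  guard config.devMode else { return }  // production ignores these results
  switch config.loggingAnomalyHandling {
  case .nothing:
    break
  case .showUI:
    // A real implementation would present an alert or overlay that also
    // names the escalation path; print keeps this sketch self-contained.
    print("Promoted metrics logging error: \(message)")
  case .breakInDebugger:
    raise(SIGTRAP)  // stops in the debugger when one is attached
  }
}
```

Partners could set the handling to break in the debugger while integrating locally and leave dev mode off in production builds, so end users never see any of this.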
Have a clear and explicit escalation path (e.g. an email alias such as help+blah@promoted.ai, a Slack channel, etc.).