postman-open-technologies / lifecycle

Defining the API lifecycle in a modular, reusable, and machine-readable way that can help others learn about what the API lifecycle could be, as well as define their own evolving API lifecycle.
https://apis.how/products/web-design/
Apache License 2.0

Team Performance #268

Open kinlane opened 2 years ago

kinlane commented 2 years ago

This is a discussion to move forward with the blueprint to define team performance, building upon existing metrics like the DORA metrics to help define what team performance is, but also to begin measuring, reporting, and steering things in the desired direction.

Reference: DORA metrics are used by DevOps teams to measure their performance and find out where they fall on a spectrum from “low performers” to “elite performers”. The four metrics used are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).
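
For concreteness, here is a minimal sketch (in Python, with invented records and field names; DORA defines the metrics themselves, not any particular implementation) of how the four metrics might be computed from raw deployment and incident data:

```python
from datetime import datetime
from statistics import mean

deployments = [
    # (deployed_at, committed_at, caused_failure) -- invented example data
    (datetime(2022, 5, 2, 10), datetime(2022, 5, 1, 9), False),
    (datetime(2022, 5, 3, 15), datetime(2022, 5, 2, 11), True),
    (datetime(2022, 5, 5, 12), datetime(2022, 5, 4, 16), False),
]
incidents = [
    # (opened_at, resolved_at)
    (datetime(2022, 5, 3, 15), datetime(2022, 5, 3, 18)),
]
days_observed = 7

# Deployment frequency (DF): deployments per day over the window.
df = len(deployments) / days_observed

# Lead time for changes (LT): average hours from commit to deploy.
lt = mean((deployed - committed).total_seconds() / 3600
          for deployed, committed, _ in deployments)

# Mean time to recovery (MTTR): average hours from incident open to resolve.
mttr = mean((resolved - opened).total_seconds() / 3600
            for opened, resolved in incidents)

# Change failure rate (CFR): share of deployments that caused a failure.
cfr = sum(failed for _, _, failed in deployments) / len(deployments)

print(f"DF={df:.2f}/day  LT={lt:.1f}h  MTTR={mttr:.1f}h  CFR={cfr:.0%}")
```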

What are the considerations when mapping this to a platform approach to the API lifecycle, given the specifics of how organizations produce APIs, as opposed to other types of software?

kinlane commented 2 years ago

Could use some help mapping these standard metrics @kevinswiber @meenakshi-dhanani @arno-di-loreto -- leave all random thoughts here.

kinlane commented 2 years ago

Additional questions to consider:

kinlane commented 2 years ago

Current blueprint deck (also available in the README of this repo)

kinlane commented 2 years ago

Charity Majors’ Recipe for High-Performing Teams

kevinswiber commented 2 years ago

A 5th metric, reliability, was added in the 2021 DORA report:

https://cloud.google.com/blog/products/devops-sre/announcing-dora-2021-accelerate-state-of-devops-report

kevinswiber commented 2 years ago

Another significant report regarding how to measure team performance is The SPACE of Developer Productivity by Dr. Nicole Forsgren, et al.

kinlane commented 2 years ago

So if we were to take this list of elements/metrics considered in team performance:

How do we map them to a platform approach supported by multiple vendors:

Then how do we provide coverage across this spectrum with PlatformOps collections + native integrations?

arno-di-loreto commented 2 years ago

Starting a Twitter thread with my random thoughts inspired by this question https://twitter.com/apihandyman/status/1525091655866540032?s=20&t=X27AWmy8DHFKI8hZTCHQHQ

kevinswiber commented 2 years ago

Focusing on a vendor map might bury the lede.

Team performance starts with culture. Culture informs process. Process requires iterative improvement. There are many formulas for iterative improvement, yet at a high level, most of them go a little like this:

A platform approach has a number of benefits for high-performing teams. One area where it shines is in centralizing key processes and making them observable.

Back to tooling: can we actually pull the observations we need to ascertain current maturity and ensure meaningful results from running experiments?

As an organization, while tools like GitHub, Slack, and Postman feel extremely effective at improving team performance, how do we access the information we need to know they're effective?
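
As one hypothetical illustration of what accessing that information could look like: GitHub exposes raw deployment events through its REST API (GET /repos/{owner}/{repo}/deployments), which could feed a deployment-frequency calculation; Slack and Postman data would need their own extractors. A sketch, with placeholder owner, repo, and token values:

```python
# Pull recent deployment events for one repository from GitHub's REST API.
# OWNER/REPO/TOKEN are placeholders; pagination and error handling omitted.
import requests

OWNER, REPO, TOKEN = "postman-open-technologies", "lifecycle", "ghp_..."

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/deployments",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"per_page": 100},
)
resp.raise_for_status()

# Each deployment record carries a target environment and a timestamp,
# which is the raw material for a deployment-frequency metric.
for deployment in resp.json():
    print(deployment["environment"], deployment["created_at"])
```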

kinlane commented 2 years ago

I like it. Agreed. Helps me stay high level on this. I am just looking for one or two layers deep right now, not the actual implementations until we know more. Have a scaffolding to think about. Start with the people, but then also the enablement through tooling. Gonna process Arnaud's tweets too, and think about it some more. Thanks, y'all!!

meenakshi-dhanani commented 2 years ago

I asked my friend how their team (they work for a multi-service platform in China, roughly a WeChat equivalent) tracks metrics - his first thought was OKRs. They also keep a check on DORA. He also mentioned that they look at SLIs (Service Level Indicators) and SLOs (Service Level Objectives). Initially, when teams know what they want to track (for instance, error rate) but don't know the threshold they want to monitor, they set certain objectives and check how their APIs perform today to set a threshold for performance. His team was at the stage of setting these SLOs. More about SLOs, SLIs, SLAs: https://www.atlassian.com/incident-management/kpis/sla-vs-slo-vs-sli
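
To make the SLI/SLO distinction concrete, here is a minimal sketch of the pattern he described; the request counts and the 99.5% target are invented examples, not standards:

```python
# SLI: the measured indicator (here, success rate as a percentage),
# computed from request counts. All numbers below are invented.
total_requests = 120_000
failed_requests = 420

sli = 100 * (total_requests - failed_requests) / total_requests

# SLO: the objective the team commits to for that indicator. Teams that
# don't yet know their threshold can start by observing today's SLI.
slo = 99.5

print(f"SLI={sli:.2f}%  SLO={slo}%  meeting objective: {sli >= slo}")
```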

Again, the metrics we want to track for high-performing teams might depend on the nature of the team, e.g., DevOps vs. squad. Also, will these metrics be considered a bar for measuring individual performance? Or are the metrics meant to understand team velocity?

At ThoughtWorks, our BA wanted to track the time taken for a user story (feature) to go from Analysis to Production, so we could understand velocity and keep our clients informed accordingly.
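
That Analysis-to-Production measurement is straightforward to sketch if each story records a timestamp for when it entered each workflow stage; all names and dates below are invented:

```python
# Per-story cycle time from Analysis to Production, given stage-entry
# timestamps (e.g., exported from a work-tracking board).
from datetime import datetime

stories = {
    "STORY-101": {"analysis": datetime(2022, 4, 1), "production": datetime(2022, 4, 12)},
    "STORY-102": {"analysis": datetime(2022, 4, 5), "production": datetime(2022, 4, 20)},
}

for story, stages in stories.items():
    elapsed = stages["production"] - stages["analysis"]
    print(f"{story}: {elapsed.days} days from Analysis to Production")
```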

meenakshi-dhanani commented 2 years ago

https://www.thoughtworks.com/radar/techniques/four-key-metrics - updated in 2019 and 2021, with March 2022 the latest update, all of which recommend ADOPTing the DORA four key metrics. These recommendations are based on what is observed across multiple teams/clients/projects at ThoughtWorks. From the Radar entry:

This research and its statistical analysis have shown a clear link between high-delivery performance and these metrics; they provide a great leading indicator for how a delivery organization as a whole is doing.

We're still big proponents of these metrics, but we've also learned some lessons. We're still observing misguided approaches with tools that help teams measure these metrics based purely on their continuous delivery (CD) pipelines. In particular when it comes to the stability metrics (MTTR and change fail percentage), CD pipeline data alone doesn't provide enough information to determine what a deployment failure with real user impact is. Stability metrics only make sense if they include data about real incidents that degrade service for the users.

We recommend always to keep in mind the ultimate intention behind a measurement and use it to reflect and learn. For example, before spending weeks building up sophisticated dashboard tooling, consider just regularly taking the DORA quick check in team retrospectives. This gives the team the opportunity to reflect on which capabilities they could work on to improve their metrics, which can be much more effective than overdetailed out-of-the-box tooling. Keep in mind that these four key metrics originated out of the organization-level research of high-performing teams, and the use of these metrics at a team level should be a way to reflect on their own behaviors, not just another set of metrics to add to the dashboard.

prempatel12 commented 2 years ago

Hi everyone!

My name is Prem Patel and Meena recommended that I comment here some metrics to look at to better measure team performance.

(Screenshot: a list of metrics for measuring engineering team performance.)

The founders I work for are also actively building an analytics dashboard that gives engineering teams better visibility into these metrics and ties in all your various data sources.

I'm trying to get feedback from people, and would love if you all would check out this interactive demo of Okay (https://app-us1b.getreprise.com/launch/4yjvGyw/).

Thanks

kinlane commented 2 years ago

@prempatel12 -- this is AWESOME. Such a real-world list! Really appreciate you sharing.

prempatel12 commented 2 years ago

Absolutely!

And would love it if y'all could also check out the interactive demo link above. It would be super helpful to relay any feedback you all might have to our co-founders on the product.

The product in a nutshell is an engineering productivity analytics tool. The metrics I shared are an example of what we collect and display for engineering teams.