codecov / engineering-team

This is a general repo to use with GH Projects

Org analytics foundations #1108

Open codecovdesign opened 7 months ago

codecovdesign commented 7 months ago

Problem Statement

The Analytics page at the organization level has faced functionality issues and user confusion since its initial iteration, which has seen minimal changes since MVP, and it requires a thorough reassessment. For example, the page shows maximum coverage instead of average, so it may not be providing the intended insights, leading to confusion about its value. Furthermore, the page is not meeting the customer expectations set in our sales process.

While we have identified ideal short-term solutions (Roadmap Issue #28), a deeper understanding of the foundational principles behind this feature would help ensure long-term effectiveness and alignment with user and business needs/goals.

Proposed Solution: Rediscovering Foundations

Goal: to build consensus and reach alignment on Analytics as a team, so that we can understand how to best support and prioritize this feature moving forward. Let's investigate the foundational / first-principles aspects of the 'Analytics' section:

  1. Understanding the Problem to Solve:

    • What specific problem(s) does the 'Analytics' page aim to solve for our users?
      • A wide view of coverage across different repositories, whereas coverage data and reports are otherwise provided only at the individual repo level.
      • Understanding at an organizational level whether the org is meeting its engineering testing/coverage goals.
  2. Identifying the Target Users:

    • Who are the expected users of this page? What are their roles and needs?
      • Engineering lead/manager? Engineering director? CTO?
  3. Aligning with Business Goals:

    • How does this page contribute to our overall business objectives?
  4. Defining User Goals:

    • What are the primary goals users aim to achieve with these analytics?
      • When <situation>, I want to <motivation>, so I can <expected outcome>
  5. Clarifying How It's Being Sold:

    • In what ways is this feature presented or sold to customers?
  6. Setting Customer Expectations:

    • What expectations do customers have regarding the analytics provided?
  7. Larger Direction and Strategy:

    • How does this feature fit into the larger product direction and strategy?

Action Items

Additional Notes

jerrodcodecov commented 7 months ago

Understanding the Problem to Solve:

  1. Understanding the Problem to Solve: What specific problem(s) does the 'Analytics' page aim to solve for our users? -- Agreed with your write-up.

-- I'd additionally add the ability for microservices customers to have a meaningful view of "project coverage," as a single repo is probably too small to represent a "project."

  2. Identifying the Target Users: Who are the expected users of this page? What are their roles and needs? -- For orgs with large repos, it would be org leaders (such as directors and executives) -- For orgs with microservices, it would also include eng managers or even individual users

  3. Aligning with Business Goals:

How does this page contribute to our overall business objectives? -- We sell and differentiate well to monorepo customers with tools like flags and components -- We do not differentiate well against the OSS or paid competitors in orgs with small repos. Most other paid competitors also have a page like this, so it is considered "table stakes"

  4. Defining User Goals: What are the primary goals users aim to achieve with these analytics?

Understand coverage on a set of repos or more than one repo. Eventually, a goal would be to show flag and component coverage across multiple repos.

  5. Clarifying How It's Being Sold: In what ways is this feature presented or sold to customers?

-- A key tool for the "Pro" tier or greater.

  6. Setting Customer Expectations: What expectations do customers have regarding the analytics provided?

-- It is demo'd in every sales pitch as a way to analyze coverage data. Therefore, customers expect it to work and to show the overall weighted average of project coverage across the org's repos.

  7. Larger Direction and Strategy:

How does this feature fit into the larger product direction and strategy?

Day 0: Don't show misleading data to users. Long term: We need to differentiate for microservices customers more, vs. only monorepos.
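As a hedged illustration of the "weighted average" expectation mentioned above: an org-level average weighted by lines of code differs from a naive mean of per-repo percentages. The repo names and numbers below are made up for illustration, not Codecov's actual computation:

```python
# Hypothetical sketch: org-level coverage as a lines-weighted average,
# not a simple mean of per-repo percentages. Numbers are illustrative.

repos = [
    # (name, covered lines, total coverable lines)
    ("api",      4_500, 5_000),   # 90% coverage
    ("worker",     300, 1_000),   # 30% coverage
    ("frontend", 1_200, 4_000),   # 30% coverage
]

covered = sum(c for _name, c, _t in repos)
total = sum(t for _name, _c, t in repos)

weighted_avg = 100 * covered / total          # big repos count for more
simple_avg = sum(100 * c / t for _name, c, t in repos) / len(repos)

print(f"weighted: {weighted_avg:.1f}%")  # 60.0%
print(f"simple:   {simple_avg:.1f}%")    # 50.0%
```

The gap between the two numbers (60% vs. 50% here) is one plausible source of the "how does the combined number work" confusion noted elsewhere in this issue.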

aj-codecov commented 7 months ago

https://github.com/codecov/roadmap/issues/28 Breakdown of current functionality and problems @codecovdesign

codecovdesign commented 7 months ago

Interview with Support Team, Friday 2/2/24

  1. Understanding the Problem to Solve:

    • What specific problem(s) does the 'Analytics' page aim to solve for our users?
      • Joe: Managers are generally using it to see trends for multiple repos at one time, whether coverage is increasing or decreasing, or even just to see the metrics of multiple repos. Considering there are core services in each repo, it's nice to see them at the same time without having to view them individually. Note: this is an internal hypothesis. I'd love to see a multi-line graph.
      • Vlad: I'm not sure we've heard much from external customers about this problem. +1 for the problem outlined by Joe.
  2. Identifying the Target Users:

    • Who are the expected users of this page? What are their roles and needs?
      • Joe/Vlad: team leads and engineering managers; the individual developer isn't going to care about this.
        • Discussion group: in the flaky test interviews, we heard that higher-volume contributors were interested in information about failing tests and gaps. In that scenario, the focus is on serving the individual developer, not a higher-level manager; it's not clear an engineering manager would care.
  3. Aligning with Business Goals:

    • How does this page contribute to our overall business objectives?
  4. Defining User Goals:

    • What are the primary goals users aim to achieve with these analytics?
      • When <situation>, I want to <motivation>, so I can <expected outcome>
        • When I want to figure out Codecov's value, I want to know if coverage metrics are improving, so I can decide whether it's helping us improve our team's code quality.
  5. Clarifying How It's Being Sold:

    • In what ways is this feature presented or sold to customers?
    • Vlad: typically we just show that the page exists, though I haven't had a single customer ask about it. However, customers do become interested when microservices are involved. It's shown, but little discussion happens around it, and there's typically no follow-up.
  6. Setting Customer Expectations:

  7. Larger Direction and Strategy:

    • How does this feature fit into the larger product direction and strategy?
    • Vlad: removing it is worth considering, to focus on other/higher-priority items
    • Is there other cross-repo visibility that would be helpful?
      • Vlad: cross-flag visibility might be helpful. Joe: could be other things entirely unrelated, e.g. an API issues/errors dashboard.
      • Joe: it comes back down to trending and visualizing whether coverage is improving, and if it's not, what can I drill into to get it fixed.
      • Other: what are the usage statistics? 3 tickets out of thousands is some signal of low usage.
codecovdesign commented 7 months ago

Feedback from the Sentry team's strategy, related to their equivalent product and their direction in handling it:

codecovdesign commented 7 months ago

Interview with Eli/Jerrod, Tuesday 2/6/24

  1. Understanding the Problem to Solve:

    • What specific problem(s) does the 'Analytics' page aim to solve for our users? (consider if the existing solution didn't exist)
      • Eli: team leads'/managers' pain point of top-down code quality mandates. From this perspective, they want to understand coverage across the repo. From a bottom-up POV, individuals who are complying want a way to prove it.
      • Jerrod: there is a competitive aspect, since other platforms have this. Additionally, for microservices customers, looking at one repo isn't satisfactory; they need to see across repos.
      • Customer context: there was a scenario where coverage data was sent to Datadog to view it there.
  2. Identifying the Target Users:

    • Who are the expected users of this page? What are their roles and needs?
      • Jerrod: For orgs with large repos, it would be org leaders (such as directors and executives). For orgs with microservices, it would also include eng managers or even individual users.
      • Eli: it's been unclear to us who the "manager" is and what related traffic is coming to Analytics.
      • Jerrod: consider customer feedback: "we need to show project coverage over time". It's helpful to have something to show to stakeholders.
      • We may not use analytics.
      • Another: a customer built Datadog alerts for when coverage dropped by a certain level 💡
        • Separately, one was working with temporo.io to create a similar workflow
  3. Aligning with Business Goals:

    • How does this page contribute to our overall business objectives?
      • Jerrod: We sell and differentiate well to monorepo customers with tools like flags and components. We do not differentiate well against the OSS or paid competitors in orgs with small repos. Most other paid competitors also have a page like this, so it is considered "table stakes".
      • Eli: it moves outside our focus on the software developer. The question is: can it serve two personas? Outside of this, I'm not sure what else we have that targets that persona.
  4. Defining User Goals:

    • What are the primary goals users aim to achieve with these analytics?
      • When <situation>, I want to <motivation>, so I can <expected outcome>
        • Developer: When I'm responsible for a specific component across repos, I want to view these flags and/or components, so I can see the testing across repos
          • A microservices case that comes up with some frequency; could still be more manager-focused.
          • Other: seeing bundle size across repos? Still maybe a manager concern.
        • Manager: When I have code quality standards, I want to be able to view data, so I can see that quality is improving
        • Mobile managers running Android/iOS apps usually don't see results together, but it's novel to see them side by side.
          • Could be goal-oriented for the developer?
            • Another: where should teams spend time as it relates to quality? Bundles/flaky tests could be baked into this, where something could help them direct/prioritize efforts.
  5. Clarifying How It's Being Sold:

    • In what ways is this feature presented or sold to customers?
    • Jerrod: A key tool for "Pro" tier or greater.
  6. Setting Customer Expectations:

    • What expectations do customers have regarding the analytics provided? Who are these customers?
      • Jerrod: It is demo'd in every sales pitch as a way to analyze coverage data. Therefore, customers expect it to work and to show the overall weighted average of project coverage across the org's repos.
      • It will continue to be used in demos.
  7. Larger Direction and Strategy:

    • How does this feature fit into the larger product direction and strategy? Jerrod: Day 0: Don't show misleading data to users. Long term: We need to differentiate for microservices customers more, vs. only monorepos.
    • Our focus is evolving toward self-serve (developer) vs. the former enterprise focus (manager+). Since Analytics already exists, it's challenging to explain why it would be deprecated.
    • Design challenge: how do we bring value to both personas when showing broader data?
      • wild and crazy ideas?
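The customer-built alerting workflow mentioned in this interview (notify when org coverage drops past a threshold) can be sketched as a simple comparison over successive readings. Everything here is a hypothetical illustration: the threshold, the readings, and the notify step are made up, and a real setup would feed this decision into Datadog or a similar monitor rather than printing:

```python
# Hypothetical sketch of a coverage-drop alert: flag when org coverage
# falls by more than a threshold between two successive measurements.
# Threshold and readings are illustrative only.

DROP_THRESHOLD = 2.0  # percentage points


def coverage_dropped(previous: float, current: float,
                     threshold: float = DROP_THRESHOLD) -> bool:
    """Return True when coverage fell by more than `threshold` points."""
    return (previous - current) > threshold


# In a real workflow this decision would trigger a Datadog monitor or
# similar alerting tool; here we just print it.
history = [81.4, 81.6, 78.9]  # successive org-wide coverage readings
for prev, curr in zip(history, history[1:]):
    if coverage_dropped(prev, curr):
        print(f"ALERT: coverage fell {prev - curr:.1f} pts ({prev} -> {curr})")
```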
codecovdesign commented 7 months ago

Interview with PM, Tuesday 2/6/24

  1. Understanding the Problem to Solve:

    • What specific problem(s) does the 'Analytics' page aim to solve for our users? (consider if the existing solution didn't exist)
      • The pain point of not seeing how coverage is changing across multiple repos over time.
      • Rohan: there is an assumption internally that we don't have much confidence in this metric of seeing things over time across repos. Is project coverage over time really valuable as a headline page?
        • Context: in looking at Codecov, my understanding is that it set out to help with code quality. Project coverage happens to be a good anchor, and this seems true for some of our base. However, we're also hearing the sentiment that project coverage is meaningless; I'm unsure we can present it as a metric that people can aspire to.
        • Adalene: considering team culture, there could be scenarios where project coverage is someone else's problem vs. patch coverage.
        • AJ: the true incentive is shipping code. Rohan: an alternative is tests not failing in the PR; good CI is a factor.
  2. Identifying the Target Users:

    • Who are the expected users of this page? What are their roles and needs?
      • Without saying manager, team lead, or CTO, it's anyone who's interested in / cares about coverage in more than one repo. Today, it has 10% of the pull page's visibility, FWIW.
      • Rohan: when we are talking about analytics, do similar assumptions carry over to the org > repo view?
        • The repo page has actionable items, whereas the Analytics page is not actionable.
  3. Aligning with Business Goals:

    • How does this page contribute to our overall business objectives?
  4. Defining User Goals:

    • What are the primary goals users aim to achieve with these analytics?
      • When <situation>, I want to <motivation>, so I can <expected outcome>
        • Developer: When evaluating a dependency, I want to see if it is reliable, so I can merge/use the library
          • This is at the PR level, when the developer introduces the dependency; at the org level, the issue is more about compliance and dependency management.
        • Manager: When I am conducting a post-mortem on an incident, I want to understand which lines caused the incident, so I can act appropriately
          • Is this an org problem?
  5. Clarifying How It's Being Sold:

    • In what ways is this feature presented or sold to customers?
    • ...
  6. Setting Customer Expectations:

    • What expectations do customers have regarding the analytics provided? Who are these customers? ...
  7. Larger Direction and Strategy:

    • How does this feature fit into the larger product direction and strategy?
      • it's not useful until this page can be made more actionable
codecovdesign commented 7 months ago

Interview with Sales Team, Thursday 2/8/24

  1. Understanding the Problem to Solve:

    • What specific problem(s) does the 'Analytics' page aim to solve for our users?
      • Heather: a number of champions we speak to are looking for a higher-level view of how the team is doing, not just the 1:1 repo level, to hold their orgs accountable. They are not finding what we have today adequate, as it's not showing the average. This could create an issue for champions who already see the page and expect it to show that.
      • RJ: not a lot of people in discovery ask how the org is doing; mostly it's about how they want to set a standard for developers. However, when we show it in demos there is a sentiment that it's nice. They'd like to track it specific to a team and track how coverage looks over time. One suggestion is to actually bookmark the page (as a way to have a saved view).
      • Sabiha: what's important to this persona is visibility and developer productivity. It always comes back to this; as a proxy for it, it's being able to report developer error fixes and downtime errors resolved. A lot of managers come to us stating they are using different tools, so being able to streamline matters. On the other hand, a lot of times we hear explicitly that it's for the team and something helpful to them.
      • Zach: specific to eng managers, it's more attractive in scenarios where they are responsible for multiple repositories. What is the feedback? They'd like a saved view of the repos they manage, rather than saving a hyperlink. What are their motivations? The need to shift left on code quality; there isn't really a blocker from not having features that benefit them directly. Their focus is that it's a developer-focused tool.
      • What might be a way to show ROI and/or help them see results:
        • Zach: they want to make sure their codebase doesn't regress. Are we showing the metrics they want to see? Not sure what metrics they want, or in what format.
        • During renewals, what data is helpful to them?
          • ToDo: that is a Heather and Vlad question
          • Overall, trend in coverage and that the OKR is being met
            • Developer sentiment
            • Zach: we haven't lost deals for it not doing what they expect; we've lost deals due to pricing.
            • However, we have lost renewal deals due to low value after a year.
            • RJ: if not price point, it's more about static analysis and/or they think we are running the tests ourselves (an automatic test suite). <Is it mandated?> Yes, typically in those cases. Another note: some come from teams under a "platform" title who are focused on how to develop the stack; they have internal coverage tools, but it's tool by tool.
        • Q: Has it ever closed a deal?
          • Sabiha: yes, though this has created some challenges; one example is the fact that Codecov couldn't share this data at a higher level. Could this have been a mismatch? Let's talk with Neil at Sentry for his POV, a bit more about what they were looking for. A major request is to be able to group coverage by team, but I know this goes against our internal preference to show it by person.
          • RJ/Zach: it's been more of a nice-to-have / cherry on top, but we can't say it's explicitly closed a deal.
          • Sabiha: tbh it's never closed a deal.
            • Yes, it would be disappointing if it were gone; it's a bit different from Zach's take. There is a greater need to see it. The thing is, with a focus only on developers: historically our success has been that the manager signs on to move forward, and the manager is also creating OKRs and is motivated.
              • Q: how do we handle renewals, such as sharing data with them?
                • A lot of reasons lately are tool-stack consolidation and not getting enough value, or more so not seeing the results. Another is issues with Codecov's poor reporting, where they don't reach out about the issue. Biggest churn reason: "we find we don't get the most value and aren't getting what we need." With flags and components configured there is higher retention; it could be helpful to have in-app suggestions.
                • Q: it seems we need to have metrics and/or compelling feedback to relay to the higher-ups
                • 💡 show ROI data without filtering
  2. Identifying the Target Users:

    • Who are the expected users of this page? What are their roles and needs?
      • ...
  3. Aligning with Business Goals:

    • How does this page contribute to our overall business objectives?
    • ...
  4. Defining User Goals:

    • What are the primary goals users aim to achieve with these analytics?
      • When <situation>, I want to <motivation>, so I can <expected outcome>
        • ...
  5. Clarifying How It's Being Sold:

    • In what ways is this feature presented or sold to customers?
    • Sabiha: RJ may talk about it briefly in a demo where maybe managers can see it by flag.
  6. Setting Customer Expectations:

    • What expectations do customers have regarding the analytics provided?
    • RJ: shows the org Analytics page (after breaking things down into flags and components) and shows how they can see coverage across multiple repos. States it could be a dashboard for their teams. Rarely any questions, but one time someone asked about the math of it all, wondering how the combined number works.
    • How would you feel if it weren't there / you weren't able to share it?
      • RJ: I'd feel indifferent about that, since it's never really too engaging.
      • Heather: during initial onboarding, the Analytics page isn't something we go over. Not so much a question, but one customer moving to dedicated cloud mentioned that proper metrics on that page are important, "to hold accountability" as their org grows.
      • Q: how about renewals? What's the driver?
        • A lot comes from the team internally and has to do with goals and benchmarks; a lot of the time they will have their own ways to measure areas of success.
        • I do have customers who ask which repos are active with Codecov, which has been a frequent ask.
  7. Larger Direction and Strategy:

    • How does this feature fit into the larger product direction and strategy?
    • Sabiha: from the sales perspective, performance, for example, can be challenging since it may not be appealing to the enterprise customer. If we only go with the developer focus we might lose some of our share, so deciding our focus and the related considerations is critical.
    • What would be helpful to know is the reasoning for the deep focus on only the developer persona. Codecov feels a bit different, but we'd like to learn more. ToDo: get a bit more insight here.
codecovdesign commented 6 months ago

Issue update: feedback summary, themes, and potential next steps

TLDR

The Analytics page in Codecov, intended for team leads and engineering managers overseeing multiple repositories, faces mixed perceptions regarding its utility and relevance. While some see it as crucial for reviewing code quality trends across repos, others question its value. The feature's ability to align with broader business objectives and cater to both developer and managerial personas remains a topic for strategic reevaluation, emphasizing the need for actionable insights and a potential shift towards more developer-centric offerings. Based on the feedback/discussions, there are follow-up discovery issues listed below for consideration, if we'd like to explore this space further.

Longer summary of team feedback

_Summary below is based on interviews documented in this issue. You can see the rough notes in comment threads for each interview._

What problem are we solving: The 'Analytics' page is perceived by some as a valuable tool for managers to oversee code quality trends across multiple repositories, while others question its value and relevance to users. There's consensus on the need for better visibility and actionable insights, with suggestions for improvements like multi-line graphs and saved views for easier navigation. There's also a recognition, as noted by Jerrod and Heather, that competitive parity and the ability to monitor multiple microservices could be helpful.

Who is the user this is for: The 'Analytics' page primarily targets team leads and engineering managers, focusing on users who oversee multiple projects or repositories. The leadership discussion outlined a user base that could include organizational leaders in large-repo orgs and possibly individual contributors in microservice environments. Despite some uncertainty about how useful or wide-reaching it is for leaders, there's consensus that the feature caters to those invested in monitoring coverage across several repositories. This includes stakeholders looking for demonstrable metrics, as well as those setting up automated alerts for significant coverage changes, suggesting a varied audience from high-level executives to hands-on engineers seeking comprehensive coverage trends.

How does this page contribute to our overall business objectives? The 'Analytics' page may have expansion potential by exposing data related to flags and components at an org level, as noted by one participant, marking a point of differentiation from competitors. However, for some organizations it's seen as "table stakes," given its presence among most competitors. Another point raised was the page's potential to serve dual personas, extending the focus beyond software developers, which presents both an opportunity and a challenge in addressing broader user needs within our product offering.

What are the JTBDs: In exploring user goals for the 'Analytics' feature, it became evident that while developers' objectives are fairly clear, defining concise jobs-to-be-done (goals) for organizational leaders with confidence proved challenging. The discussions revealed a gap in pinpointing specific analytics-driven goals for higher-level executives, underscoring a need for further investigation into clarifying their goals and expectations for such a feature.

How does it contribute to strategic direction: The integration of the 'Analytics' feature within Codecov's broader product strategy draws diverse perspectives, reflecting a potential pivot towards prioritizing developer-centric features (self-serve) over managerial analytics (sales-led). Insights from some members suggested a reevaluation of the feature's relevance, with a lean towards enhancing cross-repository visibility and actionable insights for developers. An emphasis on accurate data and stronger differentiation in the microservices domain might align with the broader shift towards a self-serve, developer-focused model, albeit with the challenge of maintaining value for managerial users. Feedback from the Sentry team further supports this direction: they highlighted a deliberate focus on the developer experience and a noted lack of demand for executive-level analytics.

Key Themes and Considerations:

Potential Actionable Steps: