Could be something that we consider in the Objectives in the development of the metrics?
Or are you thinking about meta-categories to frame things?
Honestly, I think this is the right way to go and very good, but I worry about how we define and sort the metrics into the proposed categories. We should adopt these delineations in my opinion, but I'd like to expand on how we do it, because I'm worried about a common categorization pitfall people run into in community measurement.
I also want to say that my opinion here is a bit slippery. Even after a few hours of thinking about this, I'm having trouble boiling my concern down into a statement. It ultimately may not matter, and you may have thought of this already, but I'm taking a stab. @Dylan Marcy (dylan@sociallyconstructed.online) and/or @Georg Link (georglink@bitergia.com) may be able to help me clarify my thoughts here, so please bear with me.
Here goes: categorizing metrics into these 4 proposed dimensions the way we are doing it fixes the use case of the metric on our end and limits it for the users we would deliver it to. Easy understanding of how a metric is used is great, but I have often found myself wanting to know when it should "not" be used instead. Meanwhile, as an analyst, I have frequently reframed the same metric, built for one specific purpose, to work as either a lagging or leading indicator based on the behavior it's tracking. To do that, though, I must know the strengths and weaknesses of the metric's "perspective".
With this proposed categorization, I think we are putting the metric in buckets starting from general and going down to specific. It's saying, "here's what this can do."
If we reverse the process, starting from "here are all the areas where this metric is strong or weak" and grading it across all our proposed categories, I feel we get a stronger and more flexible presentation of the metric that doesn't pigeonhole it. We can do this by making the distinction between external and internal one dimension in a "rubric" of dialectical tensions. To illustrate how this would look, I've provided 2 examples below. Framing it this way provides a solid, specific spread of what the metric is good or bad at, running from highly specific strengths and weaknesses up to general use cases. It shifts the user's thought process from convergent to divergent thinking while giving them a clear picture of what the metric can and cannot do. This gives users flexibility, knowledge, and the opportunity to use the metric in different ways, and lets the metric breathe.
It also has the benefit of generating podcast fodder in my opinion.
Here's how I see the dimensions working. Create a group of different "tensions" and rate the metric on a scale of 1-7 (or -3 to +3 with a 0). Rank the metric in each category regardless of whether you think it's applicable, and use at least 5 dimensions or you're probably not exploring the metric enough. In matrices that observe communities, always stay above 3 tensions: with only 3 you're not viewing the subject or test site from enough perspectives, which both leaves too much wiggle room and skips the "hidden advantages and limitations". Your classification then means very little, since the "C" in the rule of generalization isn't there (a concept A applies to population B only so far as limitation C allows).
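To make the idea concrete, here is a minimal sketch of how such a tension rubric could be captured as plain data. The dimension names, scores, and validation rule are illustrative placeholders drawn from the examples below, not an agreed specification:

```python
# Minimal sketch of a metric "tension" rubric as plain data.
# Dimension names and scores are illustrative, not an agreed CHAOSS spec.
from dataclasses import dataclass, field

@dataclass
class Tension:
    low_end: str   # label for a score of 1
    high_end: str  # label for a score of 7
    score: int     # where the metric falls on the 1-7 scale

@dataclass
class MetricRubric:
    name: str
    tensions: list[Tension] = field(default_factory=list)

    def validate(self) -> None:
        # At least 5 tensions, per the rule of thumb above.
        if len(self.tensions) < 5:
            raise ValueError("use at least 5 tensions to explore the metric")
        for t in self.tensions:
            if not 1 <= t.score <= 7:
                raise ValueError(f"score out of range for {t.low_end!r}/{t.high_end!r}")

# Example usage with made-up scores:
rubric = MetricRubric(
    name="Time to respond to new posts",
    tensions=[
        Tension("Value to leaders", "Value to 3rd parties", 3),
        Tension("Looks at the past", "Suggests future trends", 5),
        Tension("Insightful", "Actionable", 6),
        Tension("Surface level", "Deeply descriptive", 2),
        Tension("Easy to track", "Difficult to track", 2),
    ],
)
rubric.validate()
```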
Examples: To make it easier to see what I'm talking about, here are two examples showing what I would make the matrix look like:
Metric: Time to respond to new posts (external in your definition)
Theory: A community focused on supporting or providing a product/service must answer questions from lay users, whose questions are weighted toward the beginning of their community involvement. This crucial moment requires time to resolution for issues and has significant sway on how long they stick around.
Definition: This metric divides the number of new posts created by the number of posts with responses within 1 hour, 6 hours, 12 hours, 24 hours, and 48 hours. It provides a graph of the community's time to respond, where fewer hours is better and more hours is worse.
Use case: Determine whether your community is providing support faster or slower than your team, and how active veteran members are with newcomers' questions.
Expected time commitment: 0 hours per week
Strengths / weaknesses of this metric (this is your classification's part):

- Value to leaders 1 2 3 4 5 6 7 Value to 3rd parties
- Looks at the past 1 2 3 4 5 6 7 Suggests future trends
- Insightful 1 2 3 4 5 6 7 Actionable
- Covertly impactful 1 2 3 4 5 6 7 Overtly impactful
- Surface level 1 2 3 4 5 6 7 Deeply descriptive
- Easy to track 1 2 3 4 5 6 7 Difficult to track
- Invasive to privacy 1 2 3 4 5 6 7 Non-invasive
- Low resolution 1 2 3 4 5 6 7 High resolution
- Easy to upkeep 1 2 3 4 5 6 7 Difficult to manage
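As an aside, here is a rough sketch of how the response-time buckets above could be computed. It assumes one plausible reading of the definition (the share of new posts answered within each window, which reads more naturally as a percentage than the literal inverse ratio), and the timestamps are made up:

```python
# Rough sketch of the "time to respond to new posts" metric described above.
# Post/response timestamps and the bucket thresholds are illustrative.
from datetime import datetime, timedelta

THRESHOLDS_HOURS = [1, 6, 12, 24, 48]

def response_rate_by_threshold(posts):
    """posts: list of (created_at, first_response_at or None) tuples.

    Returns {hours: fraction of new posts answered within that many hours}.
    """
    rates = {}
    total = len(posts)
    for hours in THRESHOLDS_HOURS:
        window = timedelta(hours=hours)
        answered = sum(
            1 for created, responded in posts
            if responded is not None and responded - created <= window
        )
        rates[hours] = answered / total if total else 0.0
    return rates

# Example with two hypothetical posts:
posts = [
    (datetime(2020, 4, 1, 9, 0), datetime(2020, 4, 1, 9, 45)),  # answered in 45 min
    (datetime(2020, 4, 2, 14, 0), None),                        # never answered
]
print(response_rate_by_threshold(posts))  # {1: 0.5, 6: 0.5, 12: 0.5, 24: 0.5, 48: 0.5}
```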
Metric: Net Promoter Score (internal in your definition)
Theory: Pulse-checking users' emotions and feelings about a brand is difficult when done at regular but rare intervals such as surveys, and it does not capture a large swath of people - the only survey responses you get are from people who take surveys or already care about you. By reducing the barrier, focusing on emotion, making it easier to respond, and asking at interactions, you can get a more objective and higher-resolution picture of responses that is more consistent and up to date.
Definition: A 2-question survey asked at several points across the customer value journey to gauge and compare a person's level of preference for a brand overall, after individual interactions, and in dealing with representatives.
Use case: Asked at each point of contact with the leaders and the organization itself, NPS gauges the emotional and visceral level of satisfaction with the brand. Usually used to gauge how well an interaction goes.
Expected time commitment: 2 hours per week
Strengths / weaknesses of this metric:

- Value to leaders 1 2 3 4 5 6 7 Value to 3rd parties
- Looks at the past 1 2 3 4 5 6 7 Suggests future trends
- Insightful 1 2 3 4 5 6 7 Actionable
- Covertly impactful 1 2 3 4 5 6 7 Overtly impactful
- Surface level 1 2 3 4 5 6 7 Deeply descriptive
- Easy to track 1 2 3 4 5 6 7 Difficult to track
- Invasive to privacy 1 2 3 4 5 6 7 Non-invasive
- Low resolution 1 2 3 4 5 6 7 High resolution
- Easy to upkeep 1 2 3 4 5 6 7 Difficult to manage
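For reference, here is a hedged sketch of the conventional NPS calculation (promoters minus detractors on a 0-10 "how likely are you to recommend us?" scale). The per-touchpoint, two-question variant described above would feed responses like these in at each point of contact; the sample scores are invented:

```python
# Conventional NPS scoring, shown as a point of reference for the
# per-touchpoint variant described above. Sample responses are made up.
def net_promoter_score(scores):
    """scores: iterable of 0-10 answers to 'how likely are you to recommend us?'"""
    scores = list(scores)
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(net_promoter_score([10, 9, 8, 6, 3, 9]))  # ~16.67
```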
I really hope this made sense XD
Samantha Venia Logan, Co-Founder of SociallyConstructed.Online
I understand the distinction @mbbroberg is making regarding the usefulness of a metric for managing a community (internal value) versus securing funds (external value). I like @samanthavenialogan's suggestion not to treat them as exclusionary, since the same metric can have both internal and external value, but rather to have a rubric of characteristics we grade metrics on (a radar chart comes to mind). This rubric can then help someone search for metrics based on desired characteristics (e.g., "I'm looking for a metric that helps me do x and has characteristic y").
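Something like the following minimal sketch is what I have in mind for the radar chart; the characteristic labels and 1-7 grades are placeholders, not agreed values:

```python
# Sketch of the radar-chart idea: plot one metric's rubric grades across
# the graded characteristics. Labels and scores are placeholders.
import numpy as np
import matplotlib.pyplot as plt

labels = ["Value to leaders", "Future-looking", "Actionable",
          "Deeply descriptive", "Privacy-friendly", "Easy to upkeep"]
scores = [3, 5, 6, 2, 4, 6]  # made-up 1-7 rubric grades

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
scores_closed = scores + scores[:1]   # close the polygon
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 7)
ax.set_title("Time to respond to new posts")
plt.show()
```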
I wonder whether we can develop such characteristics that apply not only to Value metrics but to all CHAOSS metrics, and have that be part of the next big metric release.
@samanthavenialogan your approach definitely inspires looking at a self-assessment of value to leadership and value to individuals. Self-assessment may be the best we can get while keeping this to a reasonably easy-to-comprehend specification.
@GeorgLink I'm with you on a radar chart being of use, but likely beyond the scope of Value WG itself.
In short, I don't see Internal and External being the right abstraction so I'll close this issue. If someone has another proposal for focus areas -- if they are even necessary -- then a separate issue can refer back to this one.
Thanks for weighing in everyone.
I've been ruminating on the ways to frame value that capture the substance of discussion in #74 and the clarity of our agreed mission to center on metrics that result in funding.
One option I'd like to propose is what I call "Internal" and "External" value (I am completely open to a different naming convention). The idea is this: there are distinct objectives to metrics as we measure communities, and I found there needs to be a healthy mix of 2 major (and 4 minor) categories for me to "get it right" in practice.
Internal metrics feed into a community strategy, which can give them value when agreed upon as valuable by an executive sponsor or organization. But internal metrics do not translate to funding in and of themselves. I consider SCMS as an example of this one: it can be used toward an improvement, but social listening without a strategy is not going to be seen as valuable. Internal metrics can be powerful and complex metrics, but they are still raw material in the perception of value.
On the other hand, there are external metrics. These are measurements designed and defined to articulate a complete idea of value. While these may not be universally of value, they are complete in scope. One example, in #77, is Share of Voice. While far less sophisticated than SCMS, SoV is a complete value argument that positions one investment as a relative ROI to one or more in the same space.
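For readers unfamiliar with it, Share of Voice is commonly computed as our share of all mentions (or similar activity counts) in a given space. Whether #77 defines it exactly this way is not assumed here, and the counts below are made up:

```python
# Hedged sketch of Share of Voice as it is commonly computed: our share of
# all mentions (or other activity counts) among peers in the same space.
def share_of_voice(our_mentions, competitor_mentions):
    """competitor_mentions: iterable of mention counts for peers in the space."""
    total = our_mentions + sum(competitor_mentions)
    return our_mentions / total if total else 0.0

print(share_of_voice(120, [300, 80]))  # 0.24 -> 24% of the conversation in the space
```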
My intention in presenting this idea of Internal and External (again, open to better terminology here) is to highlight the gap in the latter. I have yet to meet a community team of any kind that doesn't use Internal metrics, yet so many don't even consider External until they're at risk of being laid off.
I'd appreciate thoughts, feedback, and wordsmithing.