bensheldon opened this issue 11 years ago
Why? I see the value for other folks thinking about similar projects, but I don't see why this would be of interest to most users.
This is maybe a combo of 2 patterns:

- Make it public to keep yourself accountable; and
- Building your tool around a core of metrics collection, as opposed to bolting metrics on at the end.
When I talk about "having metrics" I really mean "going through a process in which you define success, put in place a system for measuring your progress towards that success, and regularly review both the numbers and how they align with your (possibly evolving) definition of success." Which is a bit more complicated (and more practical) than just slapping on an analytics service that's designed primarily for web publishing and CPC advertising, without any further reflection or customization.
I think a big challenge with this pattern is that we're so accustomed to big splash launches and hyperbolic impact statements that honestly saying "only 5% of visitors [make it through the funnel]" seems kind of lame... whereas it enables the real, meaty question: "so what's your strategy for getting to 10%?" Not to mention: what's the baseline? (Ooh, baseline!)
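As a throwaway illustration of that "define success, measure it, review it" loop -- none of this comes from a real project, and the event counts and thresholds are invented -- a funnel metric boils down to one division and an honest comparison:

```python
# Hypothetical sketch of a "define success, measure, review" loop for a
# simple funnel. All names and numbers here are invented for illustration.

def funnel_conversion(started: int, finished: int) -> float:
    """Fraction of people who started the process and made it all the way through."""
    return finished / started if started else 0.0

# "Define success": move honestly from today's baseline toward a target.
BASELINE = 0.05   # the unglamorous 5%
TARGET = 0.10     # "so what's your strategy for getting to 10%?"

# "Measure": these counts would come from your own instrumentation,
# not from a bolted-on pageview tracker.
this_week = funnel_conversion(started=400, finished=26)

# "Review": compare the number against the baseline and the target.
print(f"conversion this week: {this_week:.1%} "
      f"(baseline {BASELINE:.0%}, target {TARGET:.0%})")
```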
> I see the value for other folks thinking about similar projects, but I don't see why this would be of interest to most users.
You're exactly right. But this is also a difference between a pattern library and a list of so-called "best practices": not every pattern has to be used, nor must a pattern be appropriate in every situation. Which should be mentioned in #5
I really appreciate your argument that thinking about metrics can shape the design. It's similar, in my mind, to thinking through a business model. I'm convinced!
However, it seems like this kind of reporting should usually be pretty unobtrusive -- just big enough for you to succeed or fail publicly, and no bigger. If I'm understanding you correctly, the main benefit of this pattern is as a goad for the developers to think.
> this kind of reporting should usually be pretty unobtrusive -- just big enough for you to succeed or fail publicly, and no bigger.
Yes! I think one of the challenges of moving processes online is that failure can be invisible: things can keep humming along even if no one is using the tool or continuing to make improvements. (Maybe we need a "Proof of Life" pattern.)
Also, I think there is a big distinction between "Investigative Reporting," where you're helping people explore data, and applications that are trying to produce a certain action or outcome (beyond maybe "literacy" about the data or the process that created it). I'm kind of biased towards the latter (though it is a continuum), but I think it's important that people trying for the latter think through what exactly the outcome is they want, and put in place a system to determine whether they got it.
For example, with CPSTiers, I'd love to see some sort of "Was this helpful? Do you better understand the Tier system? Do you feel more confident in the school selection process?" follow-up survey... though also recognizing that it could impact the experience you're going for (people may not want to use an app with low self-esteem that needs constant validation).
I think it simply comes down to: A) every project should have core metrics, B) there's no good reason to put constraints on their accessibility, and C) publishing them often has more benefits than costs. That said, patterns are not all equal and one must prioritize. Perhaps you shouldn't go out of your way to publish metrics very few care about if it gets in the way of more important objectives.
Have your core metrics (which you have, right?) automatically displayed on the website: "Served 8 people already today, up 13% from 1 month ago", "15 people started this process today; 6 people finished it", etc.
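A minimal sketch of what that could look like, assuming a small Flask app; `people_served` is a hypothetical stand-in for however your project actually records those counts:

```python
# Minimal sketch: surface core metrics directly on the site.
# Flask is just one convenient choice; people_served() is a stand-in for a
# real query against whatever metrics store the project already keeps.
from flask import Flask

app = Flask(__name__)

def people_served(days_ago: int = 0) -> int:
    """Hypothetical counter lookup; replace with a real query."""
    return 8 if days_ago == 0 else 7

@app.route("/metrics-banner")
def metrics_banner():
    today = people_served(0)
    a_month_ago = people_served(30)
    change = (today - a_month_ago) / a_month_ago * 100 if a_month_ago else 0
    return f"Served {today} people already today, up {change:.0f}% from 1 month ago"

if __name__ == "__main__":
    app.run()
```

The point is just that the banner reads from the same metrics core the team itself uses, so it stays current without anyone having to remember to update it.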
...don't make me have to build another Open311Status for your API.
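(For anyone unfamiliar: Open311Status polls public Open311 endpoints and reports whether they're up and how quickly they respond. Below is a rough sketch of the kind of external probe someone ends up writing when an API doesn't publish its own health numbers; the URL and timeout are placeholders.)

```python
# Rough sketch of an external status probe, the sort of thing a third party
# has to build when an API doesn't publish its own metrics.
# The endpoint URL below is a placeholder, not a real service.
import time
import urllib.request

API_URL = "https://example.gov/open311/v2/services.json"  # placeholder

def probe(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    """Return (is_up, response_time_in_seconds) for a single request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.monotonic() - start

if __name__ == "__main__":
    up, elapsed = probe(API_URL)
    print(f"{API_URL}: {'up' if up else 'down'} ({elapsed:.2f}s)")
```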