Discussion topic: Researching software metrics

Brief description of issue/challenge:

Institutions and organisations (companies, tenure committees, funding agencies) need to justify their decisions to people who do not have domain expertise. To this end, these groups rely on metrics to concisely convey the importance of different features of their decisions to non-experts.
Metrics came up in the breakout discussions around academic credit and ensuring that industry and academia understand the value of open source software (it likely came up in other groups too). This applies to valuing open source projects themselves, valuing individual contributors, and valuing individual contributions (including open issues).
One of the suggestions that came out of those discussions is that URSSI should directly support original research in software metrics via something like "URSSI labs" (thanks @karthik for the name). However, those discussions did not have time to explore how we might organise such research efforts or what the research priorities should be. This group would focus on exploring those issues:
Some specific topics potentially worth discussing:
- alternatives to papers & citations (in academia)
- different metric scales:
  - project importance
  - contributor importance
  - individual contribution importance
- different areas of concern:
  - sustainability
  - diversity
  - evaluating communication approaches & media
- different metric units:
  - monetary value
  - temporal value (both as a cost and as a reward)
- measuring non-coding contributions:
  - managers of those who write code
  - designers
  - technical writers & science communicators
- strategies for getting people embedded in the current systems to trust new metrics:
  - who?
    - tenure committees
    - industry managers & accounting departments
  - how?
    - metrics that connect directly to units stakeholders care about (money, time, stability, &c.); a toy sketch of one such metric follows this list
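To make that last point concrete, here is a minimal, purely hypothetical sketch of what a "stakeholder-unit" metric could look like: it converts counts of non-coding contributions into hours and dollars. The contribution kinds, per-unit effort figures, and hourly rate are all invented assumptions for illustration; working out defensible versions of those numbers is exactly the kind of research question this group would scope.

```python
"""Toy sketch: translate raw contribution counts into units a budget
holder understands (hours and dollars). Every constant below is an
assumption for illustration, not an empirically validated figure."""

from dataclasses import dataclass


@dataclass
class Contribution:
    """A single non-coding contribution to a project."""
    kind: str   # e.g. "code review", "bug triage", "documentation"
    count: int  # how many units of this kind were contributed


# Assumed average effort per contribution, in hours (made-up numbers).
HOURS_PER_UNIT = {
    "code review": 0.75,
    "bug triage": 0.25,
    "documentation": 2.0,
}

# Assumed fully loaded hourly cost of a research software engineer (USD).
HOURLY_RATE_USD = 85.0


def estimated_value(contributions: list[Contribution]) -> tuple[float, float]:
    """Return (total hours, estimated dollar value) for a set of contributions."""
    hours = sum(HOURS_PER_UNIT.get(c.kind, 0.0) * c.count for c in contributions)
    return hours, hours * HOURLY_RATE_USD


if __name__ == "__main__":
    sample = [
        Contribution("code review", 40),
        Contribution("bug triage", 120),
        Contribution("documentation", 6),
    ]
    hours, dollars = estimated_value(sample)
    print(f"~{hours:.0f} hours of effort, roughly ${dollars:,.0f} at the assumed rate")
```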
Lead/moderator:

Links to resources: