chaoss / wg-ospo

MIT License

Proposed Metric: Business Readiness Rating for Open Source #3

Open vinodkahuja opened 4 years ago

vinodkahuja commented 4 years ago

Similar to SCMS, I came across the Business Readiness Rating (BRR) metric for open source software, introduced by Dr. Tony Wasserman. It is a composite metric that comprises many of the individual metrics we are developing in various working groups. This metric will help companies gauge the readiness of open source software for commercial adoption.

A quick glance at the various metrics used in this single composite metric:

(screenshots of the metric tables omitted)

I don't know of any implementation, but I found the original metric document on the Internet Archive: https://web.archive.org/web/20060426224053/http://www.openbrr.org/docs/BRR_whitepaper_2005RFC1.pdf

A modified version of the same: https://www.researchgate.net/profile/Wolfgang_Leister/publication/264918873_INF5780_Compendium_Autumn_2014_Open_Source_Open_Collaboration_and_Innovation/links/53f5e5340cf2fceacc6f7a60/INF5780-Compendium-Autumn-2014-Open-Source-Open-Collaboration-and-Innovation.pdf

The intention is for this to go into the Organizational Value group.
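To make the "composite metric" idea concrete, here is a minimal sketch of a BRR-style weighted score. The category names, weights, and scores below are illustrative only; the official categories and weightings are in the linked white paper.

```python
# Sketch of a BRR-style weighted composite score.
# Categories, weights, and scores are made up for illustration;
# see the linked white paper for the official ones.
def business_readiness_rating(scores, weights):
    """scores: category -> rating on a 1-5 scale;
    weights: category -> fractional weight, summing to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in weights)

scores = {"functionality": 4, "quality": 3, "security": 5, "community": 2}
weights = {"functionality": 0.4, "quality": 0.3, "security": 0.2, "community": 0.1}
rating = business_readiness_rating(scores, weights)  # ~3.7 on the 1-5 scale
```

The point of the composite is that each working group's individual metric can feed one category score, and the weighting reflects a given adopter's priorities.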

mbbroberg commented 4 years ago

@vinodkahuja thank you for sharing this! Let's dig into this in the coming weeks.

DuaneOBrien commented 4 years ago

I'm interested in digging into this topic as well. Has anyone engaged with the authors of either paper?

Looking at the metric through the lens of supporting/maintaining your dependencies, something I've been thinking about for a while now is the idea of defining an approximate "Best Alternative Cost" for a given dependency. I think that the Business Readiness Rating captures enough of what I was chasing that it may not make sense to do something different.

My thinking here is that if there were a programmatic way to get an approximate Business Readiness Rating for a given list of dependencies, you could inform several decisions around both adoption and investment.

The next thing I'd want to chase down is "What are the alternatives to this dependency?" I think we could get at that by looking at community trends and data from the package ecosystems, but I'm not aware currently of tooling that would answer the question.

Ultimately, the end state would be a report that shows the Business Readiness Rating for my dependencies next to the best Business Readiness Rating for an alternative to each dependency. With that report, I could look for opportunities to move to better dependencies and do some analysis on the cost of switching. And if there doesn't appear to be any viable alternative for a dependency, that's a strong signal that we should participate in maintenance activities.
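Given per-package ratings and candidate alternatives (both hypothetical inputs here; no such tooling is named in the thread), the report described above might be sketched as:

```python
# Sketch of a dependency-vs-best-alternative report.
# Ratings and alternative lists would come from real tooling;
# here they are plain dicts supplied by the caller.
def switching_report(ratings, alternatives):
    """ratings: package -> approximate BRR score;
    alternatives: package -> list of candidate replacement packages."""
    report = []
    for pkg, alts in alternatives.items():
        best = max(alts, key=lambda a: ratings[a], default=None)
        report.append({
            "dependency": pkg,
            "rating": ratings[pkg],
            "best_alternative": best,
            "alt_rating": ratings[best] if best is not None else None,
            # No viable alternative is a signal to invest in maintenance.
            "consider_switch": best is not None and ratings[best] > ratings[pkg],
        })
    return report

ratings = {"libA": 3.1, "libB": 4.2, "libC": 2.0}
alternatives = {"libA": ["libB", "libC"], "libC": []}
report = switching_report(ratings, alternatives)
```

A row with no `best_alternative` flags exactly the "participate in maintenance" case.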

germonprez commented 4 years ago

Thanks @DuaneOBrien. I'm going to tag @sgoggins on this one, as he's been digging around the dependency side of things for a while now. I know that dependencies are really something we'd like to focus on.

sgoggins commented 4 years ago

@DuaneOBrien : tl;dr -- your question taps a rich vein of discussion, and I am going to lay out how I see the various dimensions of understanding and managing dependencies in software, particularly open source software.

Hi @DuaneOBrien : Dependencies, as you likely know, have more than one operationalization in open source software. The key ones we are working on making visible first through CHAOSS metric(s) and then tooling are:

  1. Package manager dependencies (what libraries.io used to do). Here, we understand the problem and did some detailed analysis of it before libraries.io stopped providing a free service. I maintain a "hostile fork" of the code from 18 months ago, which may serve as a starting point for our tooling.
  2. Operating system dependencies: these are almost exclusively "runtime dependencies" that require software to be installed at the OS level. gcc is a classic example, but other Linux-distro-specific tools also inject dependencies.
  3. What I call, for lack of a more clever word, "import dependencies". These are common in older, Java-language systems and take both development and runtime forms. The runtime form of these dependencies is where we have seen security risks like the unmaintained Struts version at Equifax a number of years ago.

There are likely other categories we have yet to parse out.
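For the first category, a plausible first step is enumerating the direct, declared dependencies from a manifest. A rough sketch for a Python requirements file follows; transitive resolution, as libraries.io did it, is deliberately out of scope here, and the regex is a simplification of the full specifier grammar.

```python
import re

# Sketch: enumerate direct "package manager dependencies" from a
# requirements.txt-style manifest. Simplified: handles only one
# specifier per line and ignores extras, markers, and URLs.
def direct_dependencies(requirements_text):
    """Return {package name: pinned version or None}."""
    deps = {}
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        m = re.match(r"([A-Za-z0-9_.\-]+)\s*(==|>=|<=|~=)?\s*(\S+)?", line)
        if m:
            deps[m.group(1)] = m.group(3)  # name -> version (None if unpinned)
    return deps

deps = direct_dependencies("requests==2.28.1\n# a comment\nbokeh>=2.4\nflask")
```

Each name found this way could then be looked up in a ratings source to feed the kind of report discussed above.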

It is particularly important, I think, to recognize that dependencies pose three distinct types of risks to open source software:

  1. Runtime risks: these take two forms. In publicly accessible applications, they often manifest as security vulnerabilities. Runtime risks also exist in real-time operating systems as "safety risks", because it's important that all of the open source software in your car, for example, be at the exact version that went through full path testing.
  2. Development risks: most developers will recognize these dependencies as the ones that break their code if they are not locking dependency versions during development and testing. In my own team's case, the rapid development of Bokeh routinely breaks something we already had working.
  3. "Girth risks": put simply, the more dependencies a project has, the more opportunity there is for somebody else's code to break the application.
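The version locking mentioned under development risks is commonly done by pinning exact versions in the project's manifest. A minimal, illustrative `requirements.txt` (versions are made up):

```
# Pinned: a new Bokeh release cannot break the build without an explicit bump.
bokeh==2.4.3
# Floating: new NumPy releases are picked up automatically, and may break things.
numpy>=1.21
```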

Depending on the application and the particular business concern, or the business readiness rating you would like to see developed, it's likely that all three of these "dependency risk types" play a role.

You suggest "better dependencies" and evaluating the cost of code switching. In the first case, I think the evolution metrics we already have will help you identify the most mature and stable projects, given a set of choices. In the second case, evaluating the cost of code switching is going to depend on where in your stack you are considering making a change. Assuming a different library in the same language, I think one could develop a realistic estimate of the cost of reducing dependencies, or of swapping one dependency for another because it's a "better bet".

In the case of front-end development, you are likely weighing different JavaScript libraries. Here, my own experience is threefold:

  1. JavaScript is expensive to maintain no matter what; therefore
  2. The nominal cost of changing these libraries for a small to medium sized application is not outrageous compared to the cost of simply maintaining what you have (though in general I would caution against wholesale changes to the front end of any customer-facing site), and
  3. TypeScript-based libraries are overall less expensive to maintain because, by supporting explicit data typing, they make debugging substantially easier.

Sean

DuaneOBrien commented 4 years ago

Thanks @sgoggins - lots in here, as you say.

I think it's helpful to say that most of what I'm personally after is squarely in the package manager dependencies category, as that's closer to my own needs. In that vein, I'd encourage you to take a look at https://github.com/depscloud/depscloud if you haven't already. There's a lot of "inspired by libraries.io" in there. I can connect you with the main developer if you're interested in learning more.

As for the cost of switching, it's worth calling out that this is very context-dependent. If your organization doesn't have the infrastructure to support the Next Best Alternative, the cost will be much higher for you. I envision something prescriptive for measuring readiness, but something descriptive for measuring the cost of switching.

This gets into a much bigger question than package manager dependencies, but tracking readiness ratings of projects and alternatives over time could also provide firmer data behind adoption trends than we typically get. This could help us with early identification of projects that are in decline or which are being replaced by newer, better technology. There's no substitute for being connected to the broader ecosystem, but having some data to back up observations would be useful when you're having discussions about technology investment in an organization.

mbbroberg commented 4 years ago

Takeaway from discussion in a Value WG meeting: @vinodkahuja to reach out to the author and gauge the metric's relevancy today.

sgoggins commented 4 years ago

@DuaneOBrien : That sounds really interesting. I looked over the repository and have some questions about how we might use that project to store data in an Augur database, so we can integrate these metrics with others.

vinodkahuja commented 3 years ago

I got feedback from Dr. Wasserman about the current status of this work.

"The key thing to know about OpenBRR is that we stopped working on it about 10 years ago. The work was replaced by OSSpal about 10 years ago. Check out osspal.org. I've also attached a paper on OSSpal that was published in the 2017 Int'l Conf on Open Source Systems. There's some overlap between the projects. I'd love to add hundreds of projects to OSSpal, but that takes people to contribute to the site, and we haven't had enough funding to make progress on that, so we only have a few hundred projects there. But that should give you some sense of what we are trying to do."

ElizabethN commented 2 years ago

@vinodkahuja can we move this to the Metrics Models Working Group?

vinodkahuja commented 2 years ago

@ElizabethN Yes!

tmcdowell-rs commented 2 years ago

After reviewing the details of both the BRR methodology and the OSSpal methodology/tool, I think there is great value for businesses, but strictly limited to the "overall suitability" assessments presented in the papers. For us, as a company building our end product on FOSS components and committed to (strategically) contributing back to the FOSS projects we depend on, it is equally important to identify and surface the areas of a component deemed "weak", so that we can focus our resources there.

sgoggins commented 2 years ago

Looking at it as a company that builds on open source, the question is: where do we add value?

If the code is fine, but the documentation sucks, then we would focus resources on documentation. Where do these projects need TLC?

(Value meeting, 8/11/2022)

GaryPWhite commented 1 year ago

Dropping a note in after our discussion -- would "Durability" or "adoptability" fit in a naming convention?

vinodkahuja commented 1 year ago

@GaryPWhite I think "adoptability" is more suitable than "durability", as the metric model is focused on whether an organization can adopt the open source project/software, i.e., whether the software is ready and meets the criteria mentioned in the model.