ossf / wg-vulnerability-disclosures

The OpenSSF Vulnerability Disclosures Working Group seeks to help improve the overall security of the open source software ecosystem by helping mature and advocate well-managed vulnerability reporting and communication.
https://openssf.org
Apache License 2.0

MVP Vuln Disclosure Proposal - HackerOne, Github & NodeJS Ecosystem WG #18

Open bwillis opened 3 years ago

bwillis commented 3 years ago

@crystalhazen and I were discussing how we could leverage some of the things we've discussed and turn it into an MVP that we could see some value out of, the following is what we came up with.

Problem Statement

Open source projects doing vulnerability disclosure have to go through multiple steps in order to have fixed vulnerabilities surfaced to their users. Specifically, programs like nodejs-ecosystem perform countless steps to triage, validate, coordinate fixes, and notify their users. The primary method of notifying users is publishing CVEs. Publishing a CVE involves several additional parties to coordinate, review, and publish the CVE data. This causes a combination of issues: fewer reports get published (49% of publicly disclosed vulnerabilities never get CVE IDs), and when CVE IDs are published, it takes weeks to months longer than the public disclosure itself. Simply put, the heavily manual CVE processes are slowing down the disclosure of vulnerabilities.

Ref data analysis

Goal

Reduce to less than 7 days the time it takes for vulnerabilities disclosed on HackerOne to be surfaced to package users on Github.

Side effects

Steps

  1. Hacker on HackerOne submits report to NodeJS ecosystem program
  2. NodeJS ecosystem works to solve vulnerability
  3. NodeJS ecosystem records PURL for vulnerable & fixed package
  4. NodeJS ecosystem publicly discloses report
  5. HackerOne publishes report to Github Advisory Database
  6. Github Advisory Database does its magic and alerts repos
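Step 3 is the linchpin of the flow above: recording a package URL (PURL) for both the vulnerable and fixed package. A minimal sketch of what that record might look like (the field names and the npm package type are illustrative assumptions, not a defined schema):

```python
# Minimal sketch of the record from step 3: a PURL for the vulnerable and
# fixed package attached to a disclosed report. Field names are illustrative,
# not part of any agreed schema.

def make_disclosure_record(report_id, package, vulnerable_version, fixed_version):
    """Build a minimal disclosure record keyed by package URL (PURL)."""
    base = f"pkg:npm/{package}"
    return {
        "report_id": report_id,
        "vulnerable_purl": f"{base}@{vulnerable_version}",
        "fixed_purl": f"{base}@{fixed_version}",
    }

record = make_disclosure_record("H1-123456", "example-package", "1.2.3", "1.2.4")
print(record["vulnerable_purl"])  # pkg:npm/example-package@1.2.3
```

Because the PURL encodes ecosystem, name, and version in one string, it is the only piece of data the Github Advisory Database strictly needs to match an advisory to dependent repos.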

Open Questions/actions

  1. HackerOne needs to support PURL in reports for both vulnerable version and resolved version - how to handle ranges for programs or on submission?
  2. What is the minimum amount of information to publish to Github Advisory Database?
  3. Can HackerOne publish data to the Github Advisory Database?
  4. Will NodeJS ecosystem be the guinea pigs for this?
  5. Will the data be clean enough? What other fields do we require? Do we need to support the full Unified list of metadata for vulnerability reports and disclosures?
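On question 1 (handling ranges): one option, borrowed loosely from an OSV-style "events" encoding and shown here as an assumption rather than a committed format, is to pair a versionless PURL with introduced/fixed events instead of pinning single versions:

```python
# Hypothetical range encoding for question 1: a versionless PURL plus
# introduced/fixed events, loosely modeled on the OSV "events" shape.
affected = {
    "package_purl": "pkg:npm/example-package",
    "ranges": [
        {"type": "SEMVER", "events": [
            {"introduced": "1.0.0"},
            {"fixed": "1.2.4"},
        ]},
    ],
}

def parse_version(v):
    """Turn '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def is_affected(version, range_):
    """Affected iff introduced <= version < fixed (naive semver compare)."""
    events = {k: v for e in range_["events"] for k, v in e.items()}
    return (parse_version(events["introduced"])
            <= parse_version(version)
            < parse_version(events["fixed"]))

print(is_affected("1.2.3", affected["ranges"][0]))  # True
print(is_affected("1.2.4", affected["ranges"][0]))  # False
```

This keeps single-version submissions as a degenerate case (introduced == vulnerable version) while letting programs express ranges when they know them.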

References

Github Security Advisory Form for Repos

HackerOne CVE Request Form for Programs

HackerOne Report Meta Information


This was migrated from our old repository and there was a conversation thread. You can see it all here: MVP Vuln Disclosure Proposal - HackerOne, Github & NodeJS Ecosystem WG · Issue #11 · Open-Source-Security-Coalition_Vulnerability-Disclosures-.pdf

bwillis commented 3 years ago

I wonder, if we can't rely on package maintainers to create advisories, whether there's a way to authorize a third party to create them for them. One solution that came to mind a couple days ago: package maintainers could authorize a Github app that gives the NodeJS ecosystem WG the ability to publish HackerOne vulns on their behalf. Two hangups: they'd have to authorize the app once, and I don't see a way to create vulnerability disclosures in the Github API.

@infin8x do you know if Github will allow creating vulnerability disclosures via the API anytime soon?
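If such an endpoint did exist, the app-driven flow described above might look roughly like this. The endpoint path and payload shape below are assumptions for illustration only; no advisory-creation API was documented at the time of this thread:

```python
# Hypothetical sketch: a Github App, authorized once by the maintainer,
# assembles an advisory-creation request on their behalf. The URL and JSON
# shape are assumptions, not a documented Github API.

def build_advisory_request(owner, repo, summary, vulnerable_purl, fixed_version):
    """Assemble (but do not send) a hypothetical advisory-creation request."""
    return {
        "method": "POST",
        "url": f"https://api.github.com/repos/{owner}/{repo}/security-advisories",
        "json": {
            "summary": summary,
            "vulnerabilities": [{
                "package_purl": vulnerable_purl,
                "patched_versions": fixed_version,
            }],
        },
    }

req = build_advisory_request(
    "example-org", "example-package",
    "Prototype pollution in parse()",
    "pkg:npm/example-package@1.2.3", "1.2.4",
)
print(req["method"], req["url"])
```

The one-time app authorization would supply the installation token; everything else is data HackerOne already holds at disclosure time.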

SecurityCRob commented 3 years ago

This is something Red Hat has done for upstream projects for about two decades (as do my pals at Canonical and SUSE since we're CNAs). We have assisted in multiple coordinations across the ecosystem. Do you need some good practice guidelines?

MarcinHoppe commented 3 years ago

@RedHatCRob the main goal we have here is to remove friction from the process and automate as much of it as possible.

We'd love to hear about your experiences here. We can talk about it some more in the meeting on Monday!

joshbressers commented 3 years ago

I like this issue.

The real purpose of a CVE ID is to give us a unique identifier for a security flaw. I do not expect the CVE program to speed up in the near future (they are working on things to go faster, but we're talking about years probably). A github ID will accomplish the same thing.

It also occurs to me that #17 speaks of SWID, but if we frame that discussion in this context, PURL is a superior identifier as it lacks the friction of SWID.
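For context on why PURL is low-friction: a package URL is a single self-describing string, so no registry lookup is needed to interpret it. A tiny parser for the common `pkg:type/name@version` shape (this sketch skips namespaces and qualifiers; see the purl spec for the full grammar):

```python
# Toy parser for the simple pkg:type/name@version PURL shape.
# Namespaces (e.g. npm @scope) and qualifiers are deliberately omitted.

def parse_simple_purl(purl):
    """Split a simple PURL into its type, name, and optional version."""
    if not purl.startswith("pkg:"):
        raise ValueError("not a package URL")
    rest = purl[len("pkg:"):]
    type_, _, remainder = rest.partition("/")
    name, _, version = remainder.partition("@")
    return {"type": type_, "name": name, "version": version or None}

parsed = parse_simple_purl("pkg:npm/lodash@4.17.20")
print(parsed)  # {'type': 'npm', 'name': 'lodash', 'version': '4.17.20'}
```

A SWID tag, by contrast, is an XML document that someone has to author and distribute, which is exactly the friction at issue.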

Foxboron commented 3 years ago

A Github ID does not accomplish the same thing, and it's extremely important to realize this. There is a security framework built around having one agreed-upon way to communicate vulnerabilities. Linux distributions and companies patch CVEs as a policy, and now having to also care about a Github CVE replacement is going to cause headaches. The likely scenario is that these are going to have CVEs requested regardless.

If one Ruby vulnerability now gets two IDs, there is no way to know whether we are speaking of the same thing, and the data about the issue is fragmented into separate data silos (Github/HackerOne/MITRE). At this point we are not improving vulnerability disclosure, we are working against it, and there is no clear way of untangling the data.
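One partial mitigation for the two-IDs problem is to treat the identifiers as aliases of a single record, the way later aggregation formats do with an `aliases` field. A sketch, assuming each advisory carries such an alias list (the input shape is an assumption, not any of the three databases' actual formats):

```python
# Alias-based de-duplication for the "one vuln, two IDs" problem:
# group advisory records whose identifier sets overlap.

def merge_by_alias(advisories):
    """Group advisory records that share any identifier (ID or alias)."""
    groups = []  # each group: (set_of_ids, list_of_records)
    for adv in advisories:
        ids = {adv["id"], *adv.get("aliases", [])}
        overlapping = [g for g in groups if g[0] & ids]
        merged_ids, merged_records = set(ids), [adv]
        for g in overlapping:
            merged_ids |= g[0]
            merged_records.extend(g[1])
            groups.remove(g)
        groups.append((merged_ids, merged_records))
    return groups

advisories = [
    {"id": "GHSA-xxxx-yyyy-zzzz", "aliases": ["CVE-2021-0001"]},
    {"id": "CVE-2021-0001"},
    {"id": "GHSA-aaaa-bbbb-cccc"},
]
print(len(merge_by_alias(advisories)))  # 2
```

This only works, of course, if whichever party issues the second ID records the first as an alias, which is itself a coordination problem.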

I agree that MITRE isn't perfect and modernizing the CVE infra is important, but replacing them with a github ID is the wrong direction.

JasonKeirstead commented 3 years ago

I agree with @Foxboron;

It's important that the entire end-to-end security lifecycle be considered, as it does not stop once a code fix is consumed in a downstream application. The end goal is to get that fix deployed in an application patch that is then deployed widely in enterprises.

Enterprise vulnerability and patch management products need to be able to determine what unpatched vulnerabilities exist, in order for patches to be prioritized and deployed. The databases these tools operate on are keyed on a select few industry-standard vulnerability regimes, CVE being one.

It is within the realm of possibility that OSSF could, over time, gain enough mindshare that a vulnerability ID scheme of its own creation could also be integrated into all of these systems, but that will take a long time; it won't happen overnight. Until it does, CVE really should be considered critical in order to ensure fixes actually make it to the end consumer.

mattlorimor commented 3 years ago

I align pretty heavily with @Foxboron and @JasonKeirstead, here.

There is a security framework built around having one agreed-upon way to communicate vulnerabilities. Linux distributions and companies patch CVEs as a policy, and now having to also care about a Github CVE replacement is going to cause headaches.

This is important. Entire compliance frameworks are built around CVE IDs and CVSSv3 scores. Almost every dependency and package scanning tool I've used treats the CVE ID as the identifier. There's a reason for that; it's the gold standard.

That said, it doesn't negate concerns around the CVE request process and inefficiencies within it. If 49% of all publicly-disclosed vulnerabilities go without a CVE ID, that could be cause for concern. And I'm sure there are a myriad of reasons why that may be the case.

If part of the struggle is dealing with MITRE, an attempt should be made to get representative participation from them here. They have a large stake in this world.

I would be willing to bet the onus for a non-trivial share of vulns not receiving CVE IDs falls squarely on the shoulders of those who discovered the vulns or those managing them in their own sub-ecosystem. Whose responsibility is it to get a CVE ID requested for each RUSTSEC advisory, for example? There's a lot of volunteerism and kindness of strangers going on in this world.

I think it's fine that GitHub creates GHSAs and uses them internally to facilitate their tool development, but if GHSAs are being released without CVE ID, I would question why.

It's fantastic that the sub-communities are attempting to track and disclose, but, at this point in time, that shouldn't be instead of following the widely-accepted standard of requesting a CVE ID from MITRE.

Foxboron commented 3 years ago

If part of the struggle is dealing with MITRE, an attempt should be made to get representative participation from them here. They have a large stake in this world.

I was contemplating adding this as a follow-up comment, and I do think it's important to have MITRE as part of this working group.

Whose responsibility is it to get a CVE ID requested for each RUSTSEC advisory, for example? There's a lot of volunteerism and kindness of strangers going on in this world.

This discussion is probably offtopic (a little bit), but the rustsec team should clearly apply to be a CNA and assign their own CVEs. I have no clue if they have looked into doing this.

mattlorimor commented 3 years ago

This discussion is probably offtopic (a little bit), but the rustsec team should clearly apply to be a CNA and assign their own CVEs. I have no clue if they have looked into doing this.

I, arguably, shouldn't have picked on them. They were simply the first that came to mind. As I'm sure you can see, I was simply trying to point out that if part of the problem of vulns going without a CVE ID is one simply not being requested, it's not necessarily clear whose "job" it is to even request one. The person that found the vuln can request one. Larger projects may have established guidelines for who does this. Smaller projects frequently either do nothing or figure out what to do as they go along.

bwillis commented 3 years ago

Agree with @Foxboron and others here: the intent wouldn't be to replace the CVE assignment process, which has value, and in parallel we should apply for a CVE to ensure we're capturing that value. But I do think that if the goal is still to get open source vuln fixes into the hands of consumers faster, we probably need to think about this de-coupling and moving this closer to the vuln working groups and maintainers.

If part of the struggle is dealing with MITRE, an attempt should be made to get representative participation from them here. They have a large stake in this world.

We can certainly bring MITRE into the loop, but even given a short time to publish, it is always lagging. One anecdote I know from Rails: they request a CVE (unpublished) and use it to create their security advisories; after the advisories are public, they are used in the CVE references, which is pretty standard. As a CNA, we hope to get this information from them in a timely manner, but in reality the CVE information is published days after the security advisory and fix are published. @reedloden will give us some more insight into this at tomorrow's meeting.

kurtseifried commented 3 years ago

If you are hosting on Github you can use Github as a CNA to get CVEs. Becoming a CNA is NOT something most open source projects should do, for two simple reasons:

1) Becoming a CNA is only appropriate if the project has a mature security response process and sufficient resourcing to support it; it requires care and feeding.
2) Most open source projects don't handle enough security vulns to make this worthwhile (e.g. you shouldn't even think about being a CNA until you're doing tens of security vulns/releases per year).

Just fill in the Github form and get your CVEs that way. It's less work than attaining and maintaining CNA status.

JasonKeirstead commented 3 years ago

I think the idea of OSSF (or a WG of the OSSF) becoming or supporting a CNA, one that OSSF can provide as a service to the open source community to help with that bottleneck, is something we should explore proposing; this would very much align with the OSSF mission.

In general, we also need to be careful not to let Github become a dependency, as the OSSF mission is wider than Github-hosted projects alone. Obviously that does not mean solutions that help the large number of projects on Github should not be considered; I am just saying Github can't be a gate for the overall mission.

joshbressers commented 3 years ago

If the OSSF would be willing to fund being a CNA for open source, I think that would be a phenomenal public service, but I also suspect this is a much larger conversation that will need more time than we want to wait for it to shake out.

If we look at the CVE data (chart attached in the original thread):

CVE appears to be at a plateau today. Expecting them to be able to accept a large influx of open source IDs may not be realistic without planning from MITRE.

I would suggest we split this work into two pieces. The MVP that created this issue, and the discussion around CVE.

JasonKeirstead commented 3 years ago

The interesting thing about this proposal, to me, is that it does not require much hard-dollar funding; at heart it is actually a people problem. Combine this with the fact that many or most OSS participants already perform this activity (many are CNAs themselves), except it is done in silos; in some ways this is duplicative, as multiple orgs are potentially working on the same thing tied to one open source component.

In theory, moving some resources under the OSSF to support a community effort could be a net cost savings, not just for open source as a whole, but even for each individual org, as then you could leverage that entity for any open source projects your org works on instead of using your internal process.

MarcinHoppe commented 3 years ago

My experience participating in a community driven security triage / response effort of non-trivial proportions (https://github.com/nodejs/security-wg and https://hackerone.com/nodejs-ecosystem) suggests this is not sustainable if based on volunteers alone. I am happy to discuss the details.

I'd be happy to have this discussion, but the effort probably requires funding in one form or another (companies donating dollars or employee time).

That's not to say breaking down the silos and providing a vendor-neutral open source PSIRT-like service under OSSF/LF is not a worthy goal. It absolutely is, in my opinion.

kerberosmansour commented 3 years ago

Is anyone planning to reach out to the CVE community to validate if they are prepared for that influx of data?

MarcinHoppe commented 3 years ago

@kerberosmansour Do you think this MVP alone could generate a meaningful increase in the volume of requested CVEs? I hope that as new CNAs are added, the growth in the volume of requests is accounted for.

Still, I think it would be amazing to have someone from that community participate in the WG.

mayakacz commented 3 years ago

I don't believe the issue today for open source projects is obtaining a CVE. The issue for researchers is finding the 'right' maintainer to report to, and getting that maintainer to care enough to fix the issue and publish an advisory.

My experience participating in a community driven security triage / response effort of non-trivial proportions (https://github.com/nodejs/security-wg and https://hackerone.com/nodejs-ecosystem) suggests this is not sustainable if based on volunteers alone.

:100:

Agree, the bulk of the work to be done on the maintainer side is for triage.

Goal:

Reduce to less than 7 days the time it takes for vulnerabilities disclosed on HackerOne to be surfaced to package users on Github.

According to this report, 52% of CVEs for open source are issued within 3 days, and 24% take over one month. GitHub doesn't provide an SLA for CVE issuance (but let us know if you need one to feel more confident using the service) - but in practice our team reviews these in O(days).

I do not believe accelerating or simplifying the process for a maintainer to obtain a CVE will have as large an impact as you are hoping.

That being said, I think tighter integrations with bug bounty programs, and tighter integrations with open source ecosystem maintenance teams, can only help improve the overall processes.

bwillis commented 3 years ago

I think the conversation derailed quite a bit from my initial intent. There are many problem areas along the open source security lifecycle, and this was specifically looking at optimizing the time from a vulnerability being resolved, with a patch available, to it being ready for a user to update. It is by no means the highest priority or most impactful, but it looked like a decent start: it was measurable, it fit the people we had in the group, and, most importantly, it was a tangible use case to apply our hypothetical thinking around a "unified format". Even with its limited impact, it would require some work on an integration from H1 to Github (such as publishing vulns from H1 to a Github API), but I'm unsure if we have any resources to move forward on it at this time.

Now that we have a larger group, I would encourage us to explore many of the other great ideas mentioned here and in our meetings for MVPs that will help make this group impactful.

JasonKeirstead commented 3 years ago

@bwillis There are so many problem areas along the open source security lifecycle and this was specifically looking at optimizing the time from vulnerability resolved and patch available to it being ready for a user to update.

Basically what @Foxboron and some of us are saying is: if you optimize that process, but the user either can't consume the patch or won't even know to patch (both of which can and will happen if there is no CVE), then optimizing that process has not accomplished its goal. You've created a faster fix, but getting anyone to consume it is an important concern.

Agree w/ @mayakacz that we need to make sure we're tackling the right problem and not just adding yet-another-vuln-management-process...

At the end of the day, this whole problem area of CVE being too slow and too centralized is why OSVDB was created back in the day, and it filled this gap for over a decade; unfortunately it had to shutter after ~12 years due to lack of support.

SecurityCRob commented 3 years ago

I align pretty heavily with @Foxboron and @JasonKeirstead, here.

There is a security framework built around having one agreed-upon way to communicate vulnerabilities. Linux distributions and companies patch CVEs as a policy, and now having to also care about a Github CVE replacement is going to cause headaches.

There is no "the open source". Every community works slightly differently and has differing goals and processes. Orgs like us, SUSE, and others have historically worked with those teams to set up good practices that are lightweight on the developer end and digestible by end-users (establishing the ability to privately triage bugs and to include security experts in the analysis, using tools like CVE to uniquely identify the vulnerability, and things like CVSS to help describe how the vulnerability works).

This is important. Entire compliance frameworks are built around CVE IDs and CVSSv3 scores. Almost every dependency and package scanning tool I've used treats the CVE ID as the identifier. There's a reason for that; it's the gold standard.

I hope we ALL can agree that CVSS != Risk.

That said, it doesn't negate concerns around the CVE request process and inefficiencies within it. If 49% of all publicly-disclosed vulnerabilities go without a CVE ID, that could be cause for concern. And I'm sure there are a myriad of reasons why that may be the case.

If part of the struggle is dealing with MITRE, an attempt should be made to get representative participation from them here. They have a large stake in this world.

Someone else cited some numbers (GitHub, I think). I'm unsure that getting a CVE ID is the problem; many of us have brokered this for years, and MITRE has made great improvements in their response time, tooling, and the CNA program to allow more responsible parties to participate. If people in this group are qualified and interested in the CNA program, many of us here can provide more information on that.

I would be willing to bet the onus for a non-trivial share of vulns not receiving CVE IDs falls squarely on the shoulders of those who discovered the vulns or those managing them in their own sub-ecosystem. Whose responsibility is it to get a CVE ID requested for each RUSTSEC advisory, for example? There's a lot of volunteerism and kindness of strangers going on in this world.

Right, that is exactly how open source works. Open source only works when there is a vital set of contributors, maintainers, and collaborators. It is hard work to make "free" software; anything funded companies can do to ease that burden helps the whole ecosystem. If people want to help a particular community or project respond better, I'd encourage them to join those communities and offer to help. As a group here we can work out a set of good practices that can be widely used and implemented for any size project. That's been our model: we invest in the projects our customers care about by joining and working in those communities.

I think it's fine that GitHub creates GHSAs and uses them internally to facilitate their tool development, but if GHSAs are being released without CVE ID, I would question why.

Internal tracking IDs are fine, but every confirmed vulnerability should have a unique identifier (CVE) so that everyone working to fix the issue, or impacted by it, is talking about the same thing. We should never encourage public use of alternate schemes without a strong coalition of industry/security backing. "Format wars" distract end-users from actually understanding and managing their risk.

It's fantastic that the sub-communities are attempting to track and disclose, but, at this point in time, that shouldn't be instead of following the widely-accepted standard of requesting a CVE ID from MITRE.


SecurityCRob commented 3 years ago

There are so many problem areas along the open source security lifecycle

I think making assumptions about precisely what a consumer of an open source component needs is the first area we should clarify. Working within the structure and rules of the project is critical for your success. If YOU need a longer lifecycle for YOUR product, you'll need to plan how you're going to manage backporting or creating patches once the upstream dev has moved on (or consider working with them to find a way for them to provide longer support).