Closed: @EdOverflow closed this issue 6 years ago.
Hey @EdOverflow, we appreciate your feedback and would like to address each point below:
You're completely correct in calling out the missing "context" from the discussion within the VRT. That said, we don't actually see that context being brought in at the VRT level, but at a higher level (perhaps at the target?). As mentioned in the description:
> Bugcrowd's VRT outlines Bugcrowd's baseline technical severity rating – taking into account potential differences among edge cases – for common vulnerability classes.
With the VRT scoped to a taxonomy of baseline technical severities, we can keep technical severity ratings consistent across all managed Bugcrowd programs. By moving the payment concept to a higher level, a program can designate certain targets as more valuable and appropriately incentivize research on them with higher payouts. This also has the benefit of no longer using technical severity as the lever to pull when something is less valuable; instead, the incentive (payment) on the target is adjusted. This concept is not yet in the platform, but it is something we have been thinking about.
I would like to better understand why you would like both VRT and CWE selections in the submission form. #99 only asked for a mapping to be used on the customer side, just like our CVSS mapping. When building out our CVSS mapping, we talked with a number of researchers, notably from our Researcher Hive, and all consistently asked us to have only the VRT in the submission form, leaving CVSS (and soon CWE) as mappings customers can use and manage for their internal purposes.
Edit: This mapping has since been added in #112.
That is a good point. Since we review issues as they come up, we can remove that line and look into developing a more robust tag set for managing issues.
Edit: This has since been completed in #113.
Concerning the following statement:
> That said, we don't actually see that context being brought in at the VRT level, but at a higher level (perhaps at the target?).
Evaluating the bounty amount at a higher level is a fantastic idea, and having a priority metric such as the VRT can definitely be a big help in the remediation process. The issue, though, is that this is very often not how it is being used. Programs are mapping bounty amounts directly to the priority score in their security policies. In some cases the table mapping the amounts seems almost futile, because the program then notes that it uses CVSS. [1] That all being said, you stated that this is currently changing and that you plan on putting something in place to address this issue, so I will wait and see how that turns out before commenting on this particular concern any further.
> I would like to better understand why you would like both VRT and CWE selections in the submission form. #99 only asked for a mapping to be used on the customer side, just like our CVSS mapping. When building out our CVSS mapping, we talked with a number of researchers, notably from our Researcher Hive, and all consistently asked us to have only the VRT in the submission form, leaving CVSS (and soon CWE) as mappings customers can use and manage for their internal purposes.
To be honest, I do not think that deciding where to place the CWE section is a major issue. Personally, I would not mind having it present during the submission process, or only allowing teams to select the CWE category. That said, now that you have brought the CVSS mapping to my attention, I noticed that you only evaluate the Base Score metric group. This is good for prioritisation (the VRT's goal), but Bugcrowd does not appear to allow programs to later edit the score to determine the severity based on all three CVSS metric groups. [2] Admittedly, the Base Score is the only metric group required by the National Vulnerability Database (NVD), but research has shown that evaluating the bounty amount based solely on the technical severity (the Base Score) is inaccurate and tends to underestimate the bounty amount. [3] I would suggest either extending the CVSS calculator on Bugcrowd to include all three metric groups, or informing programs about the pitfalls of using only one metric group.
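To make the three-metric-group point concrete, here is a minimal sketch of the Temporal Score adjustment as defined in the CVSS v3 specification (the metric multipliers and the Roundup routine follow the spec; the example scores are invented for illustration):

```python
# Temporal metric multipliers from the CVSS v3 specification.
# "X" (Not Defined) leaves the score unchanged.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(score: float) -> float:
    """Round up to one decimal place (CVSS v3.1 floating-point-safe version)."""
    int_input = round(score * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (int_input // 10000 + 1) / 10.0

def temporal_score(base: float, e: str = "X", rl: str = "X", rc: str = "X") -> float:
    """TemporalScore = Roundup(BaseScore * E * RL * RC)."""
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A 9.8 Base Score drops once only proof-of-concept code exists (E:P)
# and an official fix has shipped (RL:O):
print(temporal_score(9.8, e="P", rl="O", rc="C"))  # 8.8
```

Even this single extra metric group moves a 9.8 below the 9.0 "critical" threshold of the CVSS qualitative rating scale, which is exactly the kind of context a Base-Score-only calculator cannot express.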
On a side note, I would encourage you to take a look at a recent piece that @TomNomNom and I published on evaluating the bounty amount based on all three CVSS metric groups: https://edoverflow.com/2017/the-math-behind-bug-bounties/. It definitely requires a lot more work, but it is an interesting concept that I hope your team will keep in mind during future development.
I just saw the update, and the new version of the CONTRIBUTING.MD will hopefully reduce any confusion. However, I am not entirely sure why this ticket was labeled as a "question", since there does not appear to be a single question in my previous comment.
👍
> Admittedly, the Base Score is the only metric group required by the National Vulnerability Database (NVD), but research has shown that evaluating the bounty amount based solely on the technical severity (the Base Score) is inaccurate and tends to underestimate the bounty amount.
While you have pointed to an outlier program, the platform does not set technical severity based on CVSS the way it does with the VRT. So while customers can upgrade or downgrade the severity rating (we ask that they provide reasoning to the researcher), we see the vast majority using the VRT-mapped severity to determine payouts. The utility of the CVSS, CWE, and other mappings is for companies' own internal policies and processes. Because of that, we have scoped the calculator to how most of our CVSS-using customers use it within their internal workflow: only leveraging the Base Score.
> On a side note, I would encourage you to take a look at a recent piece that @tomnomnom and I published on evaluating the bounty amount based on all three CVSS metric groups: https://edoverflow.com/2017/the-math-behind-bug-bounties/.
I read it the day it was published 😄. It is definitely an interesting concept, but at this time Bugcrowd is committed to using the VRT as our de facto scoring mechanism. We decided against CVSS due to the opinionated nature of the scoring, which the researcher is unable to debate because of their limited knowledge of the company's architecture and setup (i.e., black-box testing). With the VRT and our upcoming changes to how we communicate rewards in relation to severity and "context", we hope to have a transparent rubric that is used to score a submission and arrive at a payment amount.
I can see how "question" isn't the best descriptor; I'll add a "needs discussion" tag to better filter on similar issues.
> While you have pointed to an outlier program, the platform does not set technical severity based on CVSS the way it does with the VRT.
I assume that by "outlier program" you are referring to the NVD. The NVD most definitely stores the technical severity of an issue, the Base Score, alongside the description. [1]
The following statement, "the way it does with the VRT", is precisely my point: you are basing a priority score on the technical severity and then, as you mention below, mapping it to the bounty amount.
> So while customers can upgrade or downgrade the severity rating (we ask that they provide reasoning to the researcher), we see the vast majority using the VRT-mapped severity to determine payouts. The utility of the CVSS, CWE, and other mappings is for companies' own internal policies and processes. Because of that, we have scoped the calculator to how most of our CVSS-using customers use it within their internal workflow: only leveraging the Base Score.
My second point is that your "CVSS-using customers" are clearly not using CVSS properly, and, in my opinion, you have a responsibility to educate them on evaluating bounty amounts correctly. "Severity" and "priority" are related, but they are not synonyms, and programs appear to very often pay the amounts mapped to the priority score. Also, just to make things very clear: "VRT-mapped severity" is purely the technical severity, so to prevent any confusion it might be best to state that explicitly every time (i.e., "VRT-mapped technical severity"). In other words, you appear to be saying: "we see the vast majority using VRT-mapped technical severity to determine payouts".
> We decided against CVSS due to the opinionated nature of the scoring, which the researcher is unable to debate because of their limited knowledge of the company's architecture and setup (i.e., black-box testing).
CVSS is precisely not opinionated; the VRT appears to be. I get the impression that you simply decided that `Cross-Site Scripting (XSS) > Reflected > Non-Self` is P3. What is that based on? CVSS is by no means perfect, but I believe it is closer to what I personally think programs should be using to evaluate the severity of an issue.
On a side note, since this has already been quite a long discussion, I would just like to add that I am really impressed by how friendly you have been and that you are always willing to discuss my concerns with me. On top of that, I believe Bugcrowd's goal of standardising this part of the disclosure process is a step in the right direction.
Hey @EdOverflow,
Sorry for my absence from this thread since the last comment. As for CVSS vs. the VRT, I understand your concerns, and we will take them into consideration in future CVSS-related work. At this point I think this conversation can be closed, and we can open individual issues here as they come up. Thanks for taking the time to provide feedback, and we look forward to seeing you around the platform!
Thank you very much for taking the time to discuss this with me, @barnett; I really appreciate that you are open to feedback and willing to discuss my concerns.
Dear Bugcrowd,
First of all, I would like to thank your team for open-sourcing this project and allowing members of the community to contribute to the development process. I have thought long and hard about the core concept of this project and have concluded that there are some big changes I would love to see Bugcrowd make in its approach. I will break down my two major concerns and one minor one into individual sections, and I will try my best to give suggestions for how you might resolve these issues.
Associating the security vulnerability class with a score
This is the biggest point by far, because it concerns the actual idea behind this project. The VRT currently associates a vulnerability class, as seen on the OWASP Top 10 list, with a severity score. In my opinion, and I have said this many times in the past (https://twitter.com/EdOverflow/status/931538905668702209), programs should not take this approach when evaluating the bounty amount, and sadly it is far too common (I have made this mistake in the past too). Researchers are left disappointed when a program suddenly sets a lower priority score for something the VRT told them was a P2, for instance. Not only has this caused confusion within the community (I regularly get DMs about this issue), but researchers are also starting to falsely state the issue type in the hope of a higher score.
The priority score should be determined by the impact and what the company's threat model is.
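To sketch what impact-driven prioritisation could look like (everything below is hypothetical: the `Impact` levels, the baseline class-to-priority table, and the adjustment rule are invented for illustration and are not part of the VRT):

```python
from enum import Enum

class Impact(Enum):
    # Hypothetical impact levels; in practice these would be derived
    # from the program's threat model, not hard-coded.
    TEST_DATA_ONLY = 1
    INTERNAL_DATA = 2
    PII_EXPOSURE = 3

# Hypothetical baseline: the class alone only suggests a starting point.
BASELINE = {
    "Broken Cryptography > Cryptographic Flaw > Incorrect Usage": 4,
}

def prioritize(vrt_class: str, impact: Impact) -> str:
    """Same vulnerability class, different demonstrated impact,
    different priority (P1 = most severe, P5 = least)."""
    priority = BASELINE.get(vrt_class, 5) - (impact.value - 1)
    return f"P{max(priority, 1)}"

crypto = "Broken Cryptography > Cryptographic Flaw > Incorrect Usage"
print(prioritize(crypto, Impact.TEST_DATA_ONLY))  # P4
print(prioritize(crypto, Impact.PII_EXPOSURE))    # P2
```

The point is only that the demonstrated impact, not the class label, drives the final number.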
A `Broken Cryptography > Cryptographic Flaw > Incorrect Usage` issue that only grants access to test data usually does not have the same impact as the same issue allowing the hacker to access PII. Therefore, my suggestion is that you rethink the core concept of this project and consider redesigning it so that the impact, not the vulnerability category, has a direct correlation with the score. On top of that, it could even help if Bugcrowd added some functionality that enables a program to supply the platform with its threat model and ensure that this influences the score. Your README.md does state the following, but personally I see this as a way of unintentionally covering up the real issue here:
> As stated earlier, context should be foremost when evaluating an issue.
In my opinion, CWE summarizes this concern best: "Prioritizing Weaknesses Based Upon Your Organization's Mission". [1] This brings me to my next section.
Missing Common Weakness Enumeration
After considering the first section, I am hoping that the security categories will become a separate option in the submission process, as seen on HackerOne, which currently uses the Common Weakness Enumeration list in section 2 of its submission form. This was already suggested in https://github.com/bugcrowd/vulnerability-rating-taxonomy/issues/99, but personally I would like to see this as a separate dropdown option on Bugcrowd, so you might need to restructure the current VRT JSON mapping to accommodate this change.
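One hypothetical shape for such a restructured mapping (purely a sketch; the field names and the actual VRT JSON schema may differ) would attach CWE identifiers to each VRT node:

```json
{
  "id": "server_side_injection",
  "name": "Server-Side Injection",
  "children": [
    {
      "id": "sql_injection",
      "name": "SQL Injection",
      "priority": 1,
      "cwe": ["CWE-89"]
    }
  ]
}
```

A separate CWE dropdown could then be populated from the `cwe` field without that selection feeding into the priority score.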
I would like to emphasize the fact from section one again, this option should not directly influence the priority score.
Rewording the CONTRIBUTING.MD file
Currently you state the following:
This might be confusing for some, as only members of the Bugcrowd GitHub organization can label issues. For example, I cannot label the following ticket.
In addition, I would suggest increasing the number of labels so that it is easier to refine one's search while looking for previously submitted tickets. You may use https://github.com/securitytxt/security-txt as an example of how this can be done.