bugcrowd / vulnerability-rating-taxonomy

Bugcrowd’s baseline priority ratings for common security vulnerabilities
https://bugcrowd.com/vrt
Apache License 2.0

Broken Authentication and Session Management - Weak Login Function Changes #180

Closed: jhaddix closed this 6 years ago

jhaddix commented 6 years ago

Hello all,

There has been a massive amount of conversation about this bug... all over the place. I won't re-hash all of that here. While I do think it is valuable for clients to know about, I suggest the following changes:

[proposed classification changes]

plr0man commented 6 years ago

Another option if we agree with the decreased baseline rating would be to go back to the old (v1.3) classification with a downgrade from P3 to P4: broken_authentication_and_session_management.weak_login_function.over_http
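
For illustration only, here is a minimal sketch of what such a downgrade could look like against the taxonomy file, assuming `vulnerability-rating-taxonomy.json` keeps a top-level `content` list of nested entries with `id`, `children`, and `priority` fields, and that the v1.3 `over_http` variant were restored; `find_entry` is a hypothetical helper, not part of the repo's tooling.

```python
import json

# Dotted VRT identifier quoted above; each segment is assumed to be a nested "id".
TARGET = "broken_authentication_and_session_management.weak_login_function.over_http"

def find_entry(nodes, path):
    """Walk nested VRT entries following the dotted id path (hypothetical helper)."""
    head, *rest = path
    for node in nodes:
        if node.get("id") == head:
            return node if not rest else find_entry(node.get("children", []), rest)
    return None

with open("vulnerability-rating-taxonomy.json") as f:
    vrt = json.load(f)

# Assumes a top-level "content" key; adjust if the file layout differs.
entry = find_entry(vrt["content"], TARGET.split("."))
if entry is not None:
    entry["priority"] = 4  # proposed downgrade from the current P3 baseline
    print(json.dumps(entry, indent=2))
else:
    print("Entry not found in this VRT version (the over_http variant is from v1.3).")
```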

truemongo commented 6 years ago

I might be wrong, but I believe the conversation spilled over from Twitter, so correct me if that's not the case. If this is about the earlier discussion on Twitter, I don't think any amount of VRT changes will address the core issues:

A few questions:

1. Do customers know exactly what sorts of "bugs" are "hiding away" in P3-P4 of the VRT? Do they realize they will have to pay for those once a program launches? Do they realize that a typical company has a ton of those "bugs", which in a lot of cases are known, and that they will be on the hook to reward them once a program launches? A lot of these bug types should be opt-in, not opt-out.
2. Do you feel that the treatment given to these low severity bugs is very different in a paid program vs a kudos only program? Do you see any program accepting or paying rewards for 5+ (let alone 200+) "weak login" "bugs", no matter what the severity?
3. Do you feel that the criteria for duplicates is much, much lower in kudos only programs than it is for paid programs? Think for example of the exact same bug on 10 different language sites with a clear root cause which is the same. On kudos only programs, that is 10 reports which are accepted, whereas in a paid program the client will push back (rightfully so) and make it 1 report.

I could go on, but I think you get the point. This will not get solved simply by changing VRT priorities (I thought that would be enough at some point, but clearly it is not).

jhaddix commented 6 years ago

Hey Mongo!

This is one part of a response to the overall feedback we are receiving. We are currently addressing it as follows:

  1. Review certain VRT classifications (this one specifically)
  2. Investigate alternate leaderboards for researchers
  3. Investigate a Skills Rating Taxonomy
  4. Investigate overall Kudos points system adjustments

To answer your questions:

1.

Do customers know exactly what sorts of "bugs" are "hiding away" in P3-P4 of the VRT?

Yes, every customer is onboarded with a Solutions Architect and given a rundown of the VRT. Many of them prune the VRT, and many trust the Bugcrowd defaults as well.

Do they realize they will have to pay for those once a program launches?

Yes, they realize they have to pay for those bugs.

Do they realize that a typical company has a ton of those "bugs", which in a lot of cases are known, and that they will be on the hook to reward them once a program launches?

This is an incorrect assumption (for this specific VRT entry). Several instances of your feedback are based on "wide-scope" programs. In general, we have some programs that are impacted by this problem, but most aren't. Most have a scope where they only receive a few of each, and if those are unknown to the client, they take these happily.

Another incorrect assumption is that all of these are coming from humans "farming", when in fact some of them are imported by our customers before launch from automated scanners, previous pentests, vulnerability assessments, or other known-issues sources.

We are investigating larger-scope programs and some VDPs in relation to P3 and P4 bug classes. This issue is an example of that, but by no means do I consider every P3 and P4 to need adjustment. XSS is in that range and I think that priority is correct. We will continue the normal process of reviewing the VRT for what clients deem valuable in their crowdsourced security assessments.

A lot of these bug types should be opt-in, not opt-out.

I disagree, but am happy to continue to work on polling our customers on payout ranges, priorities, and the VRT.

2.

Do you feel that the treatment given to these low severity bugs is very different in a paid program vs a kudos only program?

Good question. I would answer no, but I want to investigate your feedback here. The only instances I really know of are when subs or vuln categories are systemic across the whole site (CSRF, XSS, ++). We have added entries to the VRT in some of these cases (CSRF) and have dilution-monitoring bots letting us know (in any possible case) when a paid program is getting close to pool dilution.

What I think you're trying to find here is some kind of abuse or blind acceptance of subs. We had one customer try to do this, not maliciously, but as a (wrong) way to try and incentivize their program more. We promptly noticed this and educated them to use other methods.

All bugs go through the ASEs and the VRT. Each instance and domain is reviewed. On everything.

Do you see any program accepting or paying rewards for 5+ (let alone 200+) "weak login" "bugs", no matter what the severity?

Another good question, based on your example on Twitter. I can say yes. Of the large-scope programs we have, some are paying for that P3 variant. One customer is paying $50 per instance. Another customer is paying the program minimum of $100. It really seems to depend on what the site is and how important the credentials are for it. Obviously, we also have customers who remove it from scope at the outset of their program when reviewing the VRT with their Solutions Architect.

3.

Do you feel that the criteria for duplicates is much, much lower in kudos only programs than it is for paid programs? Think for example of the exact same bug on 10 different language sites with a clear root cause which is the same. On kudos only programs, that is 10 reports which are accepted, whereas in a paid program the client will push back (rightfully so) and make it 1 report.

For duplicates? No, our ASE team triages duplicates the same across all programs. Unique domains are considered individually. Regionalized, same-codebase sites are often marked as duplicates. If there is some sort of hybrid situation, we usually work in private comments to figure out what the customer wants to do.
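
A toy sketch of that grouping logic, with invented report data and field names rather than Bugcrowd's actual triage tooling: regional clones that share a codebase collapse into one unique issue, while a distinct codebase stays a separate report.

```python
from collections import defaultdict

# Invented example reports: regionalized sites sharing a codebase share a root cause.
reports = [
    {"id": 1, "domain": "example.com", "codebase": "www", "issue": "weak_login_over_http"},
    {"id": 2, "domain": "example.de", "codebase": "www", "issue": "weak_login_over_http"},
    {"id": 3, "domain": "example.fr", "codebase": "www", "issue": "weak_login_over_http"},
    {"id": 4, "domain": "shop.example.com", "codebase": "shop", "issue": "weak_login_over_http"},
]

# Group by (codebase, issue): same-codebase regional sites dedupe to one accepted report.
groups = defaultdict(list)
for r in reports:
    groups[(r["codebase"], r["issue"])].append(r["id"])

for (codebase, issue), ids in groups.items():
    first, *dupes = ids
    print(f"{issue} on '{codebase}': accept #{first}, duplicates: {dupes or 'none'}")
```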

I could go on, but I think you get the point. This will not get solved simply by changing VRT priorities (I thought that would be enough at some point, but clearly it is not).

This issue is definitely not the complete strategy for gamification, priorities, etc. I look forward to sharing more in the future.

I'd also like to touch on the fact that we (you, I, and others) discussed a lot of these issues 5 months ago. We had a lot of this work slotted into the product roadmap, but we abruptly needed to shift priorities and engineering to help the Rops and TechOps teams with their workflow. Like all companies that work to keep a high bar of service through a SaaS, we had neglected the operational overhead in some of that work. Regardless, we didn't communicate this well to you all, and for that we deeply apologize. We are working to keep Bugcrowd the BEST platform to work on. Again, thank you for your feedback.

ryancblack commented 6 years ago

On this issue itself, I believe we should rate this as varies, with a default priority, due to the highly contextual nature of the issue, the expectation setting and outcome influencing involved (kudos programs), and the impact on both customers and researchers. We've already set a precedent with the limited, but necessary, use of varies for similar balance and context reasons, as with IDOR and Server Security Misconfiguration - Directory Listing Enabled.

Similarly, I feel the same should apply to open FTP servers and exposed login panels, for all of the reasons discussed above and elsewhere. These "discovery-style" issues should be entirely context based and currently have a non-trivial impact on the kudos economy. I will file an issue suggesting the same, with additional detail, as an RFC and follow-up.

To add to @jhaddix's response, we're also continuing to revise how we turn company bug bounty goals into informed brief policy. The concept of "opt-in" or otherwise detailed taxonomy review remains important and heavily dependent on policy, scope state, and the kudos or cash-eligible nature of the program.

To @truemongo, on question 2: Do you feel that the treatment given to these low severity bugs is very different in a paid program vs a kudos only program?

I've only seen this once, and with the best intentions of encouraging engagement; the coaching and correction of this limited behavior was something I personally drove with the program in question. That said, companies own the final acceptance of a bug, and the current status of some of these entries, combined with zero direct cost in kudos-only programs, does make this a risk. I believe the context-based approach, which requires a conversation about real impact, will encourage in-depth discussions that mitigate such a risk.

As always, the goal of the VRT project is to enable appropriate classification of the depth and breadth of security issues yielded by well-run bug bounty programs. While this heavily influences individual (MVP) and platform (leaderboard) outcomes, I believe changes to the VRT are only one piece of ongoing balance efforts. The focus should remain on facilitating accurate risk assessment and remediation prioritization. We really appreciate the candid feedback and thoughtful discourse!

jstnkndy commented 6 years ago

I agree with @ryancblack on moving these issues to varies. I think it is important for programs to consider this issue on a case-by-case basis to really understand what the actual security impact is for the specific instance that has been reported.

bugbaba commented 6 years ago

Just my thought on this.

How is marking this as varies by default going to solve the issue?

Let's say there is a company with a kudos program that has 500 subdomains, 100 of which don't have an SSL certificate installed.

So do you expect them to judge the severity based on the data on that subdomain? And what's the range? P3-P5? Or P1-P5?

Don't you think they will see this as extra work analyzing each domain, instead of accepting them all since it doesn't hurt their pocket?

ryancblack commented 6 years ago

Thanks for your comment @bugbaba! Please see #181 as much of the commentary is related; you may have some insight and questions for it as well.

One central tenet of bug bounty is that a submission's value is both in, and judged by, driving remediation at a specific point of fix. For this example, if having a wildcard certificate on the TLD is not feasible or desired, which of those subdomains actually need HTTPS? If some of those subdomains point to public-facing static sites, they actually don't. If one report is enough information to understand both the issue and its scope, subsequent reports are not necessary and would be duplicates.

So do you expect them to judge the severity based on the data on that subdomain? And what's the range? P3-P5? Or P1-P5?

In all cases, understanding the actual impact, both breadth and data at risk, would be involved in prioritization for fix and acceptance. It could range from truly Not Applicable, to Informational (P5), on to P4 - P1, all justified by impact and governed by reasonable norms.

Don't you think they will see this as extra work analyzing each domain, instead of accepting them all since it doesn't hurt their pocket?

On a managed bug bounty program, duplicates are addressed as part of the service. There is no concept of bulk acceptance of distinct issues by design, and truly novel submissions each deserve appropriate review and prioritization. For paid programs, the financial cost of each distinct submission is self-policing; for kudos, ensuring justification of priority past de-duplication will be just one part of mitigating "rubber stamp" acceptance of questionably valuable submissions.

That said, as expanded on in #181, the goal of the VRT is to facilitate appropriate classification and prioritization of security vulnerabilities, particularly the breadth of issues found in bug bounty. I continue to believe a conversation on true impact in this regard is much better than an erroneously high or low acceptance due simply to a default recommendation.

shpendk commented 6 years ago

I like @jhaddix's option of downgrading better than "varies". I think that program owners will say "treat everything as P4" after about 2-3 subs of this nature. They're similar enough that setting a default priority works.

TroyCunefare commented 6 years ago

I would have to agree, @shpendk. I think we're starting to steer towards varies a little too much lately. I think a downgraded version would work better for all.

plr0man commented 6 years ago

We analyzed the statistical data, and it appears that only about 8% of issues of this type have received a rating lower than the mode of P3. There were no upgrades. Given the significantly higher percentages of downgrades occurring for other, unrelated VRT entries, the team does not believe this is a threshold that merits an adjustment to the current classification.
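
As a rough sketch of the kind of check being described, with made-up numbers rather than Bugcrowd's actual dataset: given the final priority assigned to each accepted submission of this entry, the downgrade share is simply the fraction rated below the P3 baseline (numerically greater than 3).

```python
# Hypothetical final priorities for 100 submissions of this entry (3 = P3, etc.).
# The numbers are invented for illustration and only mirror the ~8% figure above.
BASELINE = 3
final_ratings = [3] * 92 + [4] * 6 + [5] * 2

downgrades = sum(1 for p in final_ratings if p > BASELINE)  # rated lower than the P3 baseline
upgrades = sum(1 for p in final_ratings if p < BASELINE)    # rated higher than the P3 baseline

print(f"downgraded: {downgrades / len(final_ratings):.0%}")  # 8% in this made-up sample
print(f"upgraded:   {upgrades / len(final_ratings):.0%}")    # 0%, matching "no upgrades"
```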

ryancblack commented 6 years ago

@plr0man is spot on. To add: concerns over acceptance of this vulnerability class, particularly on kudos programs, will be addressed by a combination of tooling to assist with determining novelty, revisions to pre-launch information gathering, and our runbook for SecOps triage.

The data on the appropriate baseline, in the VRT context, is clear. We will leave this issue open for a period of comment and then close it.

Thank you all for your insight and involvement.