bugcrowd / vulnerability-rating-taxonomy

Bugcrowd’s baseline priority ratings for common security vulnerabilities
https://bugcrowd.com/vrt
Apache License 2.0

Adding a category for disclosed/leaked usernames and passwords #254

Closed theGmoney closed 4 years ago

theGmoney commented 5 years ago

Wanted to open a conversation around adding a VRT category for leaked/exposed usernames/passwords. I recognize this is a pretty grey area, and there's a lot to discuss around the legality and ethics of logging into accounts that have been leaked, but here's the scenario:

  1. Company A has users that have registered to other services (we'll call them company B) with their work email and a password.
  2. Company B gets breached, and a password dump posted online.
  3. An attacker goes through the password dump and tries all the user/pass combos against the assets/logins of Company A.
  4. Using some of these accounts, the attacker is able to gain access and compromise systems of Company A.
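The defender-side corollary of steps 1–4 can be sketched without ever logging into anyone's account: compare leaked credentials against the organization's own records offline. Everything below (the names, the data, the use of plain, unsalted SHA-1 so the two sides are comparable) is illustrative, not from any real breach or any Bugcrowd process:

```python
import hashlib

# Hypothetical dump leaked from "Company B": email -> SHA-1 of leaked password.
# All names and values here are made up for illustration.
breach_dump = {
    "alice@company-a.example": hashlib.sha1(b"hunter2").hexdigest(),
    "bob@company-b.example": hashlib.sha1(b"password1").hexdigest(),
}

# Company A's own records: email -> SHA-1 of the password currently in use.
# (Real systems should use salted, slow hashes; SHA-1 keeps the sketch short.)
company_a_users = {
    "alice@company-a.example": hashlib.sha1(b"hunter2").hexdigest(),
}

def reused_credentials(dump, users):
    """Emails whose leaked password hash matches the live password hash."""
    return sorted(
        email for email, leaked_hash in dump.items()
        if users.get(email) == leaked_hash
    )
```

A security team running something like this against its own user table can confirm credential reuse without any researcher needing to attempt a login, which is the proof-of-validity problem the rest of this thread circles around.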

So, given that this is a real potential risk to organizations, and that the current climate disincentivizes researchers from reporting these things despite the real value of knowing about compromised accounts: how should this be handled going forward?

The code of conduct specifically prohibits logging into accounts you don't own, and for good reason - it's not a great idea to have people actually logging into potentially compromised accounts (who knows what they'll see, or what alarms might get triggered) - but how else does one prove the validity of such a finding? Furthermore, are the companies not better off for knowing about these findings? If we agree there's value to them, then it's important for clients to be aware, right?

Again, there's a lot of grey area here, but I think it's a conversation worth opening and discussing, even if the outcome is that this sort of activity is explicitly forbidden; at least then we have a concrete path forward. But as it stands, I think it's worth covering as-is. This is vaguely adjacent to leaked keys, etc., but those findings are usually fewer in number and higher in impact as a whole.

I definitely have some personal thoughts on how to handle this, but want to open it up for discussion before muddying the waters with my opinion. Let me know if you have any questions. Thanks!

codingo commented 5 years ago

I've broached this with some people at Bugcrowd before, and the consensus was that if you can prove the credentials are still valid on an endpoint, the submission will be accepted, but not otherwise.

The other challenge is attribution. Collection 1/2/3/4 are all openly available, but a good portion of the credentials within them aren't attributed, so a submission of disclosed credentials would either need to contain just a list of company e-mails (which could be faked with a LinkedIn scrape) or would need to disclose employees' passwords to the security team (which may cause internal issues). This isn't a problem in the trusted-adviser position of pentesting, but I can see it posing an issue for bounties.

Additionally, I'm confident I hold more of this data than most. Do I then duplicate against somebody who has just submitted data from torrentable databases (Collection 1/2/3/4 specifically), or do I get a new bounty if I submit more e-mails that they have missed? If they are just submitting torrentable databases, do we reject the claim since those are >4 years old? Where is the line?

I love the premise, and it would be an easy one to teach people to report, but it's quite low value, as most people will go for lower-value data, and I significantly doubt triage's ability to handle this in a manner which provides value back to the programs beyond a P5 finding.

codingo commented 5 years ago

In regards to "The code of conduct specifically prohibits logging into accounts you don't own": this is one of the ideals behind a safe harbor. If I find an API key or credentials on GitHub, it's in our remit to test them and submit them if valid. I see this area as no different.

theGmoney commented 5 years ago

In this case, I think the process/issue is best addressed with a program-specific decision around "Do you accept opsec issues that are otherwise not explicitly in scope?"

From here, depending on their answer, the program owner could explicitly state that out-of-scope issues are to be reported to their security@ email, VDP, or other location, or set explicit parameters around reporting such issues: "Security issues that affect X organization but are not application-level findings against the in-scope targets will be accepted by this program, but will not be rewarded monetarily or with kudos." Something that allows people to submit such findings if they have them, but doesn't encourage it, because, as you've pointed out, the threshold for acceptance and value is impossibly vague and varying.

That said, I'm a little reticent to compare leaked API keys on GitHub to personal login details. I get that they're very close to each other, but keys are usually org-specific, while user:pass combos are user-specific. Mostly, if we agree that they're roughly the same thing, then there's nothing to stop a deluge of purported P1s, since stuff leaked on GitHub is quite commonly rated (and rewarded) at that level. I'll concede that the outcome could be similar in exceptional cases, but again, you have to log in to know (though useless keys are P5'd, so it should wash out; still, the added noise is unlikely to be seen as valuable by program owners, especially if it's from a 2014 data dump).

On that point, I believe Safe Harbor only covers research that's in accordance with the program brief, so maybe the corollary to all of this is having program owners explicitly state whether or not one may log in with credentials they find (I feel like this is where all these points are ending up... on the brief)? I get showing impact, but if you find a hardcoded string labeled "super_secret_aws_key", it feels reasonable that one could just submit it as "I found this sensitive thing"; I may be naive, but that's at least how I've historically handled such things, as opposed to testing out the key, seeing what it has access to, etc.
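The "submit it as 'I found this sensitive thing'" approach above amounts to pattern-matching on the leaked artifact rather than using it. A minimal sketch of what that might look like, where the pattern names, the generic regex, and the sample string are all hypothetical (only the `AKIA...` prefix shape of AWS access key IDs is a real convention):

```python
import re

# Hypothetical patterns a researcher might flag without ever using the keys.
SECRET_PATTERNS = {
    # AWS access key IDs conventionally start with "AKIA" + 16 chars.
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    # Crude catch-all: an identifier containing "secret" or "key",
    # assigned a quoted value of 8+ characters.
    "generic_secret": re.compile(
        r"(?i)[a-z0-9_]*(?:secret|key)[a-z0-9_]*\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_secrets(text):
    """Return (pattern_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Illustrative input; AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID.
sample = 'super_secret_aws_key = "AKIAIOSFODNN7EXAMPLE"'
```

Reporting the match itself leaves the "what does it actually access?" question to the program owner, which is exactly the trade-off being debated here: less proven impact, but no need to exercise credentials you don't own.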

All that aside, it feels like most of this has to be addressed at a program level, and on the brief itself (it should be relatively easy to create some templated text blocks/options), though I'm also open to any and all suggestions for streamlining, etc. Thanks!