Closed - gdestuynder closed this 7 years ago
Hrm.. good point.
Those (currently) come from Veris/Verizon: https://github.com/vz-risk/veris/blob/master/verisc-labels.json
Should we alter ours on import? Convince them to change theirs? Alternate?
They are all the same to me. Except that using different terminology indicates you are talking about a different metric when discussing them.
It's all apples to oranges though. One company used p1 for high priority items while another used p3 for high, p0 being the lowest.
Same thing for severity, imho. It's relative, depending on who is consuming the information.
I'm sure Verizon has code built around their framework, as do many other organizations using it. You could be one voice vs. many in that argument.
Just my two cents.
Alicia
@Phrozyn That's exactly one of the recurring problems in the security community. See openssl advisories for example: nobody knows what high or critical means to them, and there's chatter about it every time.
That's why we use standardized levels with a complete description of each. Of course best would be if everyone standardized on something - but at least it helps within our own projects.
In this case it sounds like it would be interesting to get Verizon's take on it, regardless of the outcome. Note that in code we do conversions anyway, all the time - no choice :) For example, vuln2bugs normalizes CVSS and some other data into the Mozilla standard levels (which are surprisingly easier to communicate than CVSS, too ;-)
Hi Guillaume!
Thanks for this. Being mainly operational and coming from a place where dev time is hard to come by, this really helps me understand it.
Doing conversions just makes sense. I've never really encountered anyone having issues with levels; that's why I responded the way I did.
Carry on!
Alicia
"it would be interesting to get Verizon's take on it" Maybe I/we should reach out to https://github.com/whbaker for his take
Hello everyone. First of all, thanks for working with and referencing VERIS. It hasn't received much love lately because many of us who used to work with it a lot have left Verizon. But since it is open and no longer a Verizon-only thing, I've been meaning to pick it up again at some point soon. Stuff like this can help shape it.
Re your question on impact levels, I'll try to clarify. That impact scale is intended for use in a post-incident scenario to provide a qualitative rating of "pain" to the business. VERIS also included fields to record a quantitative estimate. I'm not sure about the context in which you're using it above, but I see mention of CVSS. My sense is that it's probably not optimal for something like that.
If you're trying to figure out how well it maps to a different scale - maximum, high, medium, low, for instance - I suggest reviewing the expanded labels:
```json
"overall_rating": {
  "Catastrophic": "Catastrophic: A business-ending event (don't choose this if the victim will continue operations)",
  "Damaging": "Damaging: Real and serious effect on the \"bottom line\" and/or long-term ability to generate revenue",
  "Distracting": "Distracting: Limited \"hard costs\", but impact felt through having to deal with the incident rather than conducting normal duties",
  "Insignificant": "Insignificant: Impact absorbed by normal activities",
  "Painful": "Painful: Real, somewhat serious effect on the \"bottom line\"",
  "Unknown": "Unknown"
}
```
Hopefully that will help you figure out whether it'll fit.
On the point of standardization, I couldn't agree more. In fact, one of the reasons I haven't done a lot with VERIS lately is that many VERIS elements have been incorporated into STIX as part of the data model or recognized vocabs. That's not to say they can't both exist, but if STIX has the momentum, I'd rather influence and incorporate than "compete." And FWIW, STIX does leverage the VERIS impact rating for its Incident:Impact_Assessment construct. https://stixproject.github.io/data-model/1.2/stixVocabs/ImpactQualificationVocab-1.0/
Hope that helps. Let me know if that opens more questions than I answered.
That does help, thanks Wade. I should have given you some context.
We recently published the levels we use: https://wiki.mozilla.org/Security/Standard_Levels
Basically, we're telling ourselves to only use low, medium, high, and maximum, since that offers a readily understandable range. Guillaume is rightly pointing out that an immediate discrepancy within Mozilla is MozDef's use of VERIS, since the levels don't match.
That's good info that STIX is a likely synchronizing point. I'm not exactly a fan, FWIW, and find it cumbersome. MozDef, for example, uses VERIS to let one tag incidents, which, given the schema, is straightforward.
Not to turn this into a STIX discussion, but tagging doesn't seem like a primary use case for STIX when compared to VERIS? [reference: http://stixproject.github.io/about/#veris ]
Yeah I'm in fact quite happy with the VERIS tagging in MozDef - it's just that it could use standardization with our other tooling. Also, this is great info and thanks!
Here's some more info/context: The CVSS example is an example of conversion from "standard X [here CVSS]" to Mozilla's standard levels. Here's the actual conversion for this case: https://github.com/gdestuynder/vuln2bugs/blob/master/vuln2bugs.json.inc#L45
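As a rough illustration of the kind of conversion vuln2bugs does (the real mapping lives in its JSON configuration linked above; the thresholds below are hypothetical, not the actual cutoffs):

```python
def cvss_to_standard_level(score):
    """Map a CVSS base score (0.0-10.0) to a Mozilla standard level.

    Hypothetical thresholds for illustration only; the real cutoffs
    live in vuln2bugs' JSON configuration.
    """
    if not 0.0 <= score <= 10.0:
        return "unknown"
    if score >= 9.0:
        return "maximum"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

print(cvss_to_standard_level(9.8))  # maximum
print(cvss_to_standard_level(5.0))  # medium
```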
The STIX levels are interesting if they gain traction (https://stixproject.github.io/data-model/1.2/stixVocabs/ImpactQualificationVocab-1.0/). However, in my experience, using more generic terms (low-medium-high-maximum for example - notice in particular that it's "maximum", not "critical") helps a lot when attempting to apply the same levels, with the same meaning, to the different metrics we use (i.e. not just incident impact).
The big advantage of being generic is that when one comes across a 'HIGH' anywhere that follows our levels, they know immediately and exactly what that means - even if it's a "work effort" and not a "risk impact", or anything else really.
Here's an attempt to translate STIX to Mozilla standard levels as another example:
Mozilla Standard Level | STIX |
---|---|
UNKNOWN | Unknown |
LOW | Insignificant |
MEDIUM | Distracting |
HIGH | Painful, Damaging |
MAXIMUM | Catastrophic |
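In code, that table would be a trivial lookup. A minimal sketch in Python (the names are illustrative, not from MozDef):

```python
# STIX ImpactQualification value -> Mozilla standard level, per the table above
STIX_TO_MOZILLA = {
    "Unknown": "UNKNOWN",
    "Insignificant": "LOW",
    "Distracting": "MEDIUM",
    "Painful": "HIGH",
    "Damaging": "HIGH",
    "Catastrophic": "MAXIMUM",
}

def stix_to_mozilla(rating):
    # Fall back to UNKNOWN for values outside the STIX vocabulary
    return STIX_TO_MOZILLA.get(rating, "UNKNOWN")
```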
One thing I immediately notice is that STIX's Painful and Damaging have almost the same definition ("somewhat serious" vs "serious in the long term").
I'm on the DBIR team and have been looking at what can be done to improve VERIS.
When it comes to impact (or likelihood) ratings, I'd recommend focusing on cross-correlation. I've found that everyone has 'names' for levels that work for them, and that they become rather attached to them for whatever reason. The easiest solution is to have clear mappings rather than ask people to change their scores. The simplest approach is to map everything to a 0-100 scale, but direct mappings work as well.
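The 0-100 cross-correlation idea could look like this sketch (the scale names and band edges are invented for illustration, not anyone's official mapping):

```python
# Cross-correlate differently-named scales by mapping each level to a band
# on a common 0-100 scale. Band edges here are arbitrary examples.
SCALES = {
    "mozilla": {"low": (0, 25), "medium": (25, 50),
                "high": (50, 75), "maximum": (75, 100)},
    "veris":   {"Insignificant": (0, 25), "Distracting": (25, 50),
                "Painful": (50, 75), "Damaging": (50, 75),
                "Catastrophic": (75, 100)},
}

def to_common(scale, level):
    """Return the midpoint of a level's band on the common 0-100 scale."""
    lo, hi = SCALES[scale][level]
    return (lo + hi) / 2

def translate(level, src, dst):
    """Translate a level from one scale to another via the common scale."""
    score = to_common(src, level)
    for name, (lo, hi) in SCALES[dst].items():
        if lo <= score < hi or (score == 100 and hi == 100):
            return name

print(translate("Catastrophic", "veris", "mozilla"))  # maximum
```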
For VERIS, I'd like to make sure that enumerations remain as objective as possible, preferably linked to business KPIs. Is Guillaume's mapping above an accurate mapping of the business impacts associated with 'low/med/high/maximum'?
@gdbassett our level ratings are documented at https://wiki.mozilla.org/Security/Standard_Levels#Risk_levels_definition_and_nomenclature (also "yes"). In particular they're not tied to specific financial impact as these are company dependent (and Mozilla's rather special in that case). You can find our process for tying that together here: https://wiki.mozilla.org/Security/Risk_management/Rapid_Risk_Assessment
In particular though, we have a scoring "engine" that takes multiple likelihood factors and a single impact factor (RRA). We're calling the multiple likelihood factors "data points"; these are normalized to our low-max scale, and we calculate risk from them on a resulting 0-100 scale. That risk score (0-100) is then mapped back to a normalized level per the standard levels (so you get both the quick/simple "how risky do we estimate this is right now?" answer and a more granular 0-100 score). I think that might in fact be implementing the same idea you're mentioning. In other words, the VERIS score from incidents becomes a low-max data point which is part of the formula resulting in a 0-100 + low-max score per service.
While we haven't documented the risk calculation part yet, it'll come up eventually - we're still playing with the data at this point :) (also, this part has more/mostly code attached to it)
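Since the actual calculation isn't published yet, here's only a hypothetical sketch of the shape of that engine - several likelihood "data points" (each normalized to low..maximum) plus one impact factor produce a 0-100 score, which maps back to a standard level. The averaging/weighting formula is made up for illustration:

```python
# Numeric values for the normalized levels (invented for this sketch)
LEVEL_VALUES = {"low": 25, "medium": 50, "high": 75, "maximum": 100}

def risk_score(likelihood_datapoints, impact):
    """Average the likelihood data points, then weight by impact.

    The formula is illustrative only; the real RRA calculation
    is not yet published.
    """
    likelihood = sum(LEVEL_VALUES[dp] for dp in likelihood_datapoints) / len(likelihood_datapoints)
    return likelihood * LEVEL_VALUES[impact] / 100

def score_to_level(score):
    # Map the granular 0-100 score back to a standard level
    if score > 75:
        return "maximum"
    if score > 50:
        return "high"
    if score > 25:
        return "medium"
    return "low"

# A VERIS-derived incident rating would be one of these data points
s = risk_score(["high", "medium", "maximum"], "high")
print(s, score_to_level(s))
```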
See also https://bugzilla.mozilla.org/show_bug.cgi?id=1120558. In MozDef's incident UI (/incident/) there are tags such as impact.loss.rating.{Major,Moderate,Minor,None}.
I wondered if it would make sense to use maximum,high,medium,low instead. Same for confidence tags.
See also https://wiki.mozilla.org/Security/Standard_Levels