kristovatlas opened 8 years ago
The JSON format could look something like this:
'attackers':
[
  {
    'name': 'attacker 1',
    'weight': -1, // leave as -1 in order to be accumulated from severity benchmarks by docgen.py
    'severity-benchmarks': [
      {
        'description': 'quantity of likely attackers',
        'weight': 50, // an int between 0 and 100 where 0 is "few" and 100 is "many"
        'max-weight': 100,
        'min-weight': 0
      },
      {
        'description': 'temporal window of attacks',
        'weight': 100, // an int between 0 and 100 where 0 is "short" and 100 is "long"
        'max-weight': 100,
        'min-weight': 0
      }
    ],
    'attacks': [
      {
        'name': 'attack 1',
        'weight': -1,
        'severity-benchmarks': [
          {
            'description': 'probability of attack success if unmitigated',
            'weight': 100,
            'max-weight': 100,
            'min-weight': 0
          },
          {
            'description': 'severity of information gained in successful attack',
            'weight': 50,
            'max-weight': 100,
            'min-weight': 0
          }
        ],
        'countermeasures': [
          {
            'name': 'countermeasure 1',
            'effectiveness': -1,
            'severity-benchmarks': [
              {
                'description': 'likelihood of mitigation if completely implemented',
                'score': 1.0,
                'max-score': 1.0,
                'min-score': 0.0,
                'comment': 'countermeasure 1 would always be effective if completely implemented because blah blah blah'
              },
              {
                'description': 'severity of information protected by countermeasure',
                'score': 0.75,
                'max-score': 1.0,
                'min-score': 0.0,
                'comment': 'countermeasure 1 would only protect 75% of the data lost by attack 1 because blah blah blah'
              }
            ],
            'criteria': [
              {
                'name': 'criterion 1',
                'effectiveness': -1,
                'severity-benchmarks': [
                  {
                    'description': 'thoroughness of implementing countermeasure',
                    'score': 0.60,
                    'max-score': 1.0,
                    'min-score': 0.0,
                    'comment': 'completion of criterion 1 indicates a 60% application of countermeasure 1 because blah blah blah'
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  }
]
If the above example were put through docgen.py, it would compute the following values:
The effective weights of all criteria in the threat model might sum to 100 total, depending on whether our criteria allow a wallet to do a perfect job of mitigating every attack we list, but the sum of all criteria's effective weights would definitely not exceed 100.
All of these severity benchmarks would simply be multiplied together to derive a final weight, effectiveness, or score. The next version of this project could include complex arithmetic relationships between the scores, but I don't think we need this for v1.0.0.
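To make the arithmetic concrete, here is a minimal sketch of that accumulation step (the function names are hypothetical, not taken from docgen.py):

  def normalize(benchmark):
      """Map a benchmark's weight or score into the range [0.0, 1.0]."""
      value = benchmark.get('weight', benchmark.get('score'))
      lo = benchmark.get('min-weight', benchmark.get('min-score', 0))
      hi = benchmark.get('max-weight', benchmark.get('max-score', 1.0))
      return (value - lo) / float(hi - lo)

  def accumulate(benchmarks):
      """Multiply all normalized severity benchmarks together."""
      product = 1.0
      for benchmark in benchmarks:
          product *= normalize(benchmark)
      return product

With the values in the example above, attacker 1 accumulates 0.5 * 1.0 = 0.5, attack 1 accumulates 1.0 * 0.5 = 0.5, countermeasure 1 accumulates 1.0 * 0.75 = 0.75, and criterion 1 contributes 0.6. If the levels chain multiplicatively (my reading of "effective weight"), criterion 1's effective weight would be 0.5 * 0.5 * 0.75 * 0.6 = 0.1125, or 11.25 on a 0-100 scale.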
LGTM :+1:
A few changes to my proposed new format:
'attackers':
[
  {
    'name': 'attacker 1',
    'min-weight': 0,
    'max-weight': 100,
    'severity-benchmarks': [
      {
        'description': 'likelihood of attack against average user',
        'relationship': 'direct',
        'weight': 50, // an int between 0 and 100 where 0 is "few" and 100 is "many"
        'max-weight': 100,
        'min-weight': 0
      },
      {
        'description': 'temporal window of attacks',
        'relationship': 'direct',
        'weight': 100, // an int between 0 and 100 where 0 is "short" and 100 is "long"
        'max-weight': 100,
        'min-weight': 0
      }
    ],
    'attacks': [
      {
        'name': 'attack 1',
        'weight': -1,
        'severity-benchmarks': [
          {
            'description': 'probability of attack success if unmitigated',
            'relationship': 'direct',
            'weight': 100,
            'max-weight': 100,
            'min-weight': 0
          },
          {
            'description': 'severity of information gained in successful attack',
            'relationship': 'direct',
            'weight': 50,
            'max-weight': 100,
            'min-weight': 0
          }
        ],
        'countermeasures': [
          {
            'name': 'countermeasure 1',
            'relationship': 'direct',
            'severity-benchmarks': [
              {
                'description': 'likelihood of mitigation if completely implemented',
                'relationship': 'direct',
                'score': 1.0,
                'max-score': 1.0,
                'min-score': 0.0,
                'comment': 'countermeasure 1 would always be effective if completely implemented because blah blah blah'
              },
              {
                'description': 'severity of information protected by countermeasure',
                'relationship': 'direct',
                'score': 0.75,
                'max-score': 1.0,
                'min-score': 0.0,
                'comment': 'countermeasure 1 would only protect 75% of the data lost by attack 1 because blah blah blah'
              }
            ],
            'criteria': [
              {
                'name': 'criterion 1',
                'effectiveness': -1,
                'severity-benchmarks': [
                  {
                    'description': 'thoroughness of implementing countermeasure',
                    'relationship': 'direct',
                    'score': 0.60,
                    'max-score': 1.0,
                    'min-score': 0.0,
                    'comment': 'completion of criterion 1 indicates a 60% application of countermeasure 1 because blah blah blah'
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  }
]
Currently, scores are expressed in the JSON file as integer 'weight' values for attackers and attacks and as float 'effectiveness' values for countermeasures and criteria.
I suggest that we instead accumulate these scores from a series of score sub-criteria. In the OBPP v2 threat model, we refer to these as "acceptance criteria" -- rules of thumb for how we derive the subjective values that compare various threat model elements -- but in the JSON format I propose we call them "severity benchmarks" to avoid confusion with what we're currently calling criteria.
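As a rough sketch of how a consumer like docgen.py might walk the revised structure: only 'direct' appears as a 'relationship' value in the example above, so the 'inverse' handling below is a guess at how an opposite relationship might work, and all function names here are hypothetical.

  def factor(benchmark):
      """Normalized multiplier in [0.0, 1.0] for one severity benchmark."""
      value = benchmark.get('weight', benchmark.get('score'))
      lo = benchmark.get('min-weight', benchmark.get('min-score', 0))
      hi = benchmark.get('max-weight', benchmark.get('max-score', 1.0))
      result = (value - lo) / float(hi - lo)
      # 'inverse' (hypothetical) would flip the normalized value
      if benchmark.get('relationship') == 'inverse':
          result = 1.0 - result
      return result

  def accumulate(element):
      """Product of all of an element's severity benchmark factors."""
      product = 1.0
      for benchmark in element['severity-benchmarks']:
          product *= factor(benchmark)
      return product

  def criterion_weights(attackers):
      """Yield (criterion name, effective weight on a 0-100 scale)."""
      for attacker in attackers:
          for attack in attacker['attacks']:
              for countermeasure in attack['countermeasures']:
                  for criterion in countermeasure['criteria']:
                      yield criterion['name'], 100.0 * (
                          accumulate(attacker) * accumulate(attack) *
                          accumulate(countermeasure) * accumulate(criterion))

Summing the weights yielded by criterion_weights would make it easy to verify the property above: the criteria's effective weights can approach but never exceed 100.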