Closed nremezov closed 4 years ago
I did additional tests. There is a chance that the TCap and POA values are "swapped" with each other: when I increased TCap, the Vulnerability value lowered.
@nremezov , thank you for the kind words and the bug report.
I will take a look at this tonight and try to pinpoint the origin.
@nremezov , your suspicions were correct; thanks again for the detailed report.
Can you use pip to update your install to 0.1-alpha.9, rerun your example, and close this bug if it is fixed?
Your examples should now yield 55 (formerly 45) for the first example and 42 (formerly 58) for the second example ... which gives you a lower V when supplied with a higher CS value, as you would expect.
Within ModelCalc._calculate_step_average() there was a statement:
bool_series = child_1_data > child_2_data
Control Strength was given as child_1_data and Threat Capability was given as child_2_data.
When converted, this gave a "1" when Control Strength was greater than Threat Capability, and a "0" when Threat Capability was greater than Control Strength. This is precisely the opposite of what should have occurred. A "1" value should occur when the Threat overwhelms (i.e. is larger than) the Control ... which in turn leads to higher Vulnerability values.
This statement has since been changed to:
bool_series = child_1_data < child_2_data
... which is a bit embarrassing but I suppose that's what alphas are for.
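For illustration, here is a minimal sketch of the corrected comparison using made-up values (not the actual `ModelCalc` internals); a "1" per simulated row means the threat overcame the control, and the mean of those flags is the Vulnerability estimate:

```python
import pandas as pd

# Hypothetical per-simulation draws (illustrative only)
control_strength = pd.Series([0.84, 0.90, 0.60, 0.75])   # child_1_data
threat_capability = pd.Series([0.85, 0.70, 0.95, 0.75])  # child_2_data

# Fixed comparison: a vulnerability event occurs when
# Threat Capability exceeds Control Strength.
bool_series = control_strength < threat_capability
vulnerability = bool_series.astype(int).mean()
print(vulnerability)  # 0.5 — the threat wins in two of four draws
```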
Would you please update your version by running:
pip install pyfair --upgrade
... and close this report if this fixes your problem?
Good morning. In short: yes, it looks like the bug is fixed.
I've upgraded PyFair by running pip install pyfair --upgrade
The installed pyfair version is 0.1a9
I've run the 2 models above and the results for V were 55 and 42, as you said. I've run other models and the V changes look consistent.
Big Thanks!
Hello, I'm running PyFair 0.1a8. I think there could be an issue with the Vulnerability calculation when it is derived from PoA and TCap. Here are two models and their results:
Model 1 - CS is lower than TC:

model3 = pyfair.FairModel(name="Example Model 2", n_simulations=30000)
model3.input_data('Contact', low=200, mode=1000, high=3000)
model3.input_data('Action', low=0.85, mode=0.95, high=1)
model3.input_data('Threat Capability', low=0.6, mode=0.85, high=0.98)
model3.input_data('Control Strength', low=0.59, mode=0.84, high=0.97)
model3.input_data('Secondary Loss Event Frequency', low=0.5, mode=0.85, high=1)
model3.input_data('Secondary Loss Event Magnitude', low=5000, mode=10000, high=20000)
model3.input_data('Primary Loss', low=15000, mode=25000, high=50000)
model3.calculate_all()

Result 1: Vulnerability value is 0.45.
Model 2 - CS is higher than TC:

model3 = pyfair.FairModel(name="Example Model 2", n_simulations=30000)
model3.input_data('Contact', low=200, mode=1000, high=3000)
model3.input_data('Action', low=0.85, mode=0.95, high=1)
model3.input_data('Threat Capability', low=0.6, mode=0.85, high=0.98)
model3.input_data('Control Strength', low=0.63, mode=0.87, high=0.99)
model3.input_data('Secondary Loss Event Frequency', low=0.5, mode=0.85, high=1)
model3.input_data('Secondary Loss Event Magnitude', low=5000, mode=10000, high=20000)
model3.input_data('Primary Loss', low=15000, mode=25000, high=50000)
model3.calculate_all()

Result 2: Vulnerability value is 0.58.
It seems like a mistake to me, because as we increase Control Strength, the Vulnerability value should decrease.
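That expectation can be sanity-checked outside pyfair with a quick Monte Carlo sketch (hypothetical code, using the same triangular parameters as the two models above): with Threat Capability fixed, raising the Control Strength bounds should lower the fraction of draws in which the threat wins, i.e. the Vulnerability estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Threat Capability draws, shared parameters from both models
tc = rng.triangular(0.6, 0.85, 0.98, 30000)

# Vulnerability = fraction of draws where Control Strength < Threat Capability
v_low_cs = (rng.triangular(0.59, 0.84, 0.97, 30000) < tc).mean()   # Model 1 CS
v_high_cs = (rng.triangular(0.63, 0.87, 0.99, 30000) < tc).mean()  # Model 2 CS

print(v_low_cs > v_high_cs)  # True — higher CS yields lower V
```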
P.S. Great application. And the documentation is better than the official one (to me).