(part of this thread on OWASP's Slack Threat Model channel)
Great thread, here are the techniques I use to scale threat modeling:
1) Security Champions (one per team) own the threat models (they help to create and maintain them)
2) A central AppSec team reviews those threat models, helps to find blind spots and (if the Security Champion and dev team don't have experience in doing them) helps with the first couple
3) Perform a threat model per feature and per pattern (usually aligned with user stories or the app's capabilities). This is key to keeping the threat models a sane size and allowing a good zoom into what is really going on (threat models are then chained to give the full picture; see the chaining sketch after this list)
4) Risks identified in a threat model are mapped into the JIRA Risk Workflow (as described in https://github.com/DinisCruz/Book_SecDevOps_Risk_Workflow ), with risks ranging from highly technical to architectural to business (it is key that they are all linked together, so that they create a 'body of proof'; a workflow sketch follows this list)
5) Threat models are required for each new feature (they should be created and updated as part of the dev team's sprints). This is part of the AppSec policy and provides a real mandate for teams to create threat models proactively
6) Threat Models are available on a central wiki, and are seen as sources of truth (i.e. the most up-to-date documentation)
7) Avoid closed or complex diagramming tools; ideally use 'text-based diagrams' (PlantUML, Graphviz dot) or tools that integrate with the wiki (like Draw.io with Confluence)
8) Threat models are created by doing: diagrams (DFDs or use cases) + STRIDE questions (see the PlantUML example after this list)
9) Threat models are given to pentesters, whose job is seen as "find blind spots and prove (or not) the threats/assumptions/countermeasures identified". Risks identified must have a high degree of provability (in 'real world' scenarios). FUD should be avoided as much as possible; "It is not possible (for lack of time or knowledge) to understand the real impact of this risk" is a risk in itself
10) Once this is all in place, it is a BIG problem if any (new) vulnerability is discovered in QA or a pentest, or exploited on a live system: by nature of being 'new', it is NOT currently identified in a threat model (with its risk accepted). This represents a gap in the current thinking, knowledge, workflow and understanding of what is really going on. Root cause analysis and lessons learned are needed.
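
To make the 'chaining' idea in 3) concrete, here is a minimal PlantUML sketch of feature-level threat models linked into an app-wide picture. The feature names and the relationships between them are invented for illustration only:

```plantuml
@startuml
' Illustrative only: per-feature threat models chained into an
' app-wide picture (feature names are invented for this sketch)
package "App-wide threat model (chained)" {
  [Login threat model] as Login
  [Password reset threat model] as Reset
  [Payments threat model] as Payments
}
Login --> Reset : shared user-store assumptions
Login --> Payments : session-token assumptions
@enduml
```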
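For 4), here is a rough PlantUML state diagram of the kind of JIRA Risk Workflow the linked book describes. The state names below are my approximation, not the book's canonical workflow, so check the book for the real one:

```plantuml
@startuml
' Approximate sketch of a JIRA Risk Workflow; the state names are
' an assumption - see the linked book for the canonical workflow
state "Allocated for Fix" as Allocated
state "Test Fix" as TestFix
state "Awaiting Risk Acceptance" as AwaitingAcceptance
state "Risk Accepted" as RiskAccepted

[*] --> Open : risk raised from a threat model
Open --> Allocated : team commits to a fix
Allocated --> Fixing
Fixing --> TestFix
TestFix --> Fixed : fix confirmed
Open --> AwaitingAcceptance : business decides not to fix (yet)
AwaitingAcceptance --> RiskAccepted : owner explicitly accepts the risk
Fixed --> [*]
RiskAccepted --> [*]
@enduml
```

The point of the two end states is that every risk either gets fixed or gets explicitly accepted by someone with the authority to do so; nothing is silently dropped.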
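And for 7) and 8), a minimal example of what 'diagram + STRIDE questions' looks like as a text-based diagram. The feature ('password reset'), the elements and the questions are all illustrative, not a template:

```plantuml
@startuml
' Minimal text-based diagram for a hypothetical 'password reset'
' feature, with STRIDE prompts attached (all names illustrative)
actor User
node "Web App" as WebApp
database "User DB" as UserDB

User --> WebApp : reset request (crosses Internet trust boundary)
WebApp --> UserDB : lookup user + store token (internal network)
WebApp --> User : reset email with token

note bottom of WebApp
  STRIDE questions per element/flow:
  S - can the requester spoof another user?
  T - can the token be tampered with in transit?
  R - are reset attempts logged and attributable?
  I - can the token leak (logs, referrer, email)?
  D - can the endpoint be flooded with requests?
  E - can a reset escalate to another account?
end note
@enduml
```

Because it is plain text, this kind of diagram diffs cleanly in version control and renders directly in wikis with PlantUML support, which is what makes it workable as the 'source of truth' in 6).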