cmu-sei / SEER

SEER is a platform for assessing the performance of cybersecurity training and exercise participants.

SEER: System (for) Event Evaluation Research

Project Background

SEER automates the collection and processing of training and exercise data — from participant-entered incident response case reports to collaborative chat to other range systems — to provide detailed assessment-related reports on team and individual performance.

By providing qualitative & quantitative analysis of performance and removing subjectivity, its results enable the refinement of best practices and their subsequent adoption into T&R standards. SEER assists in identifying high-performing units within training and exercises. It is also a step toward removing "game-isms" from assessment by enabling participants to self-report their observations and subsequent activity.

Overview

SEER, in combination with an IR platform (TheHive) and a communications app (Mattermost), can capture all three essential data points in real time and provide reports on, and comparisons between, teams exercising under the same scenarios. SEER collects all data from TheHive and Mattermost, maps messages and actions to their associated teams and users, and tracks the progress of incident response for each scheduled inject (incident) within the exercise. From this, SEER produces individual and team reports on the actions taken during the exercise and provides timelines of the IR process for each inject.
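The collection-and-mapping step above can be sketched as follows. This is a minimal illustration using hypothetical message and case-update records and made-up field names, not SEER's actual data model or its Mattermost/TheHive integrations:

```python
from collections import defaultdict

# Hypothetical records, shaped roughly like what SEER might receive
# from Mattermost (chat messages) and TheHive (IR case updates).
messages = [
    {"channel": "team-alpha", "user": "analyst1", "text": "Seeing odd DNS traffic"},
    {"channel": "team-bravo", "user": "analyst2", "text": "Phishing email reported"},
    {"channel": "team-alpha", "user": "analyst3", "text": "Opened case for DNS anomaly"},
]
case_updates = [
    {"team": "team-alpha", "inject": "INJ-01", "stage": "identification", "t": 120},
    {"team": "team-alpha", "inject": "INJ-01", "stage": "mitigation", "t": 480},
    {"team": "team-bravo", "inject": "INJ-02", "stage": "identification", "t": 300},
]

def map_messages_to_teams(messages):
    """Group chat activity by team channel, as SEER does when mapping
    messages to associated teams and users."""
    by_team = defaultdict(list)
    for m in messages:
        by_team[m["channel"]].append((m["user"], m["text"]))
    return dict(by_team)

def inject_timeline(case_updates, team, inject):
    """Return the ordered IR stages (with timestamps) one team has
    recorded for one scheduled inject."""
    steps = [u for u in case_updates if u["team"] == team and u["inject"] == inject]
    return sorted((u["t"], u["stage"]) for u in steps)

teams = map_messages_to_teams(messages)
timeline = inject_timeline(case_updates, "team-alpha", "INJ-01")
```

Grouping by team and sorting per-inject stages by timestamp is what makes the per-inject IR timelines and team comparisons possible.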

Team Assessment Challenges

The ideal assessment would involve analyzing every step of a unit's process for incident management — for Defensive Cyber Operations (DCO), this is traditionally identification, mitigation, quarantine, etc. — the timing of their actions, and their lines of communication as they operate. These requirements have been hard to capture with traditional assessment systems.

There are many stakeholders with distinct needs within exercise and training, including:

Problem Statement

The SEER project aspires to help solve some of the perennial challenges in evaluating individual and team performance within a cyber exercise:

  1. Clearly identify high performers
  2. Conduct qualitative & quantitative analysis on indicators/drivers of that high performance
  3. Use collected data to establish regular best-practice refinement in training and exercise standards
  4. Determine why high-performing individuals and teams did better
  5. Survey a team's organizational characteristics, composition, and task performance
  6. Analyze indicators/drivers of high performance over time, including:
    • Organizational characteristics
    • Team composition
    • Team & individual task performance

As part of ongoing evaluation, SEER seeks to answer assessment-related questions such as the following:

Typical Workflow

  1. Definition of training objectives — an important step in establishing what is to be assessed; these objectives drive the design of the exercise scenario
  2. Training objectives are mapped to scenario events (aka "injects")
  3. Admins/OPFOR design the exercise scenario along a timeline, including individual injects, with each step mapped to MITRE ATT&CK
    • Injects are adapted from our internal inject catalog of hundreds of exploits
  4. The Admin/OPFOR clicks a button in SEER to add the event to MISP with default mapped info, including the tags SEER needs when the event comes back to it. (MISP also provides HHQ intel on potential threats.)
  5. MISP automatically updates TheHive.
  6. As the assessed team records operational notes in TheHive on the activity they are seeing, SEER consumes and processes this data to make an effective assessment
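The tag round-trip in steps 4-6 (SEER pushes an event to MISP carrying tags that let it recognize the event when it flows back through TheHive) could look roughly like this. The tag prefix and field names here are assumptions for illustration only, not SEER's or MISP's actual tag scheme:

```python
# Hypothetical tag scheme: SEER marks MISP events so it can correlate
# them with TheHive case activity when the data returns to SEER.
SEER_TAG_PREFIX = "seer:inject="  # assumed prefix, not SEER's real format

def build_misp_event(inject_id, attack_technique):
    """Assemble a MISP-style event dict with the default mapped info:
    an inject-ID tag plus a MITRE ATT&CK technique tag."""
    return {
        "info": f"Scenario inject {inject_id}",
        "tags": [
            f"{SEER_TAG_PREFIX}{inject_id}",
            f'mitre-attack-pattern:"{attack_technique}"',  # illustrative tag form
        ],
    }

def extract_inject_id(event):
    """When the event comes back (MISP -> TheHive -> SEER), recover the
    inject ID from its tags so observed activity can be correlated
    with the scheduled inject."""
    for tag in event.get("tags", []):
        if tag.startswith(SEER_TAG_PREFIX):
            return tag[len(SEER_TAG_PREFIX):]
    return None

event = build_misp_event("INJ-01", "T1566")
recovered = extract_inject_id(event)
```

Carrying the inject ID as a tag means the correlation survives the hop through MISP and TheHive without SEER needing to track external event IDs.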

Next Steps

Future projects include the following:

Leverage Framework of Frameworks

We are looking to integrate popular frameworks such as: