
Optimal Intrusion Response

An OpenAI Gym interface to an MDP/Markov game model for optimal intrusion response in a realistic infrastructure, simulated using system traces.

Included Environments

optimal-intrusion-response-v1 (see Usage below)

Requirements

Python 3 and OpenAI Gym (imported as gym below)

Installation

# install from pip
pip install gym-optimal-intrusion-response==1.0.0

# local install from source
pip install -e gym-optimal-intrusion-response

# local install from source, force upgrade of dependencies
pip install -e gym-optimal-intrusion-response --upgrade

# git clone and install from source
git clone https://github.com/Limmen/gym-optimal-intrusion-response
cd gym-optimal-intrusion-response
pip install -e .

Usage

The environment can be accessed like any other OpenAI Gym environment via gym.make. Once the environment has been created, the API functions step(), reset(), render(), and close() can be used to train any RL algorithm of your choice.

import gym
import gym_optimal_intrusion_response  # importing the package registers the environments with Gym

env_name = "optimal-intrusion-response-v1"
env = gym.make(env_name)
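
For example, a full episode can be run with a random policy. This is a minimal sketch using only the standard Gym API shown above; any RL library that speaks this API can be substituted for the random action selection.

import gym
import gym_optimal_intrusion_response  # registers the environments with Gym

# Run one episode with a uniformly random policy (illustration only;
# substitute a trained RL agent's policy for real experiments).
env = gym.make("optimal-intrusion-response-v1")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()  # sample a random action from the action space
    obs, reward, done, info = env.step(action)
    episode_return += reward
    # env.render()  # optional: visualize the environment state
env.close()
print("episode return:", episode_return)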

Infrastructure

Traces

Alert/login traces collected from the emulated infrastructure are available in ./traces.
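
The traces can be inspected with standard tooling. Below is a minimal sketch that prints the first few rows of a trace; the file name and CSV layout are hypothetical placeholders, so check ./traces for the actual files and format.

import csv

# Hypothetical example: inspect an alert trace. The file name and assumed
# CSV layout are placeholders; see ./traces for the actual files/format.
with open("traces/alerts.csv") as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        print(row)  # e.g. per-time-step alert/login counts
        if i >= 4:  # only show the first few rows
            break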

Publications

@inproceedings{hammar_stadler_cnsm_21,
  author={Kim Hammar and Rolf Stadler},
  title={Learning Intrusion Prevention Policies through Optimal Stopping},
  booktitle={International Conference on Network and Service Management (CNSM 2021)},
  address={Izmir, Turkey},
  year={2021},
  note={\url{http://dl.ifip.org/db/conf/cnsm/cnsm2021/1570732932.pdf}},
  keywords={Network Security, automation, optimal stopping, reinforcement learning, Markov Decision Processes},
  abstract={We study automated intrusion prevention using reinforcement learning. In a novel approach, we formulate the problem of intrusion prevention as an optimal stopping problem. This formulation allows us insight into the structure of the optimal policies, which turn out to be threshold based. Since the computation of the optimal defender policy using dynamic programming is not feasible for practical cases, we approximate the optimal policy through reinforcement learning in a simulation environment. To define the dynamics of the simulation, we emulate the target infrastructure and collect measurements. Our evaluations show that the learned policies are close to optimal and that they indeed can be expressed using thresholds.}
}

@misc{hammar2021intrusion,
  title={Intrusion Prevention through Optimal Stopping},
  author={Kim Hammar and Rolf Stadler},
  year={2021},
  eprint={2111.00289},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
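
The papers above show that the optimal defender policies in the stopping formulation are threshold based. A minimal illustration of such a policy follows; the belief variable and the threshold value alpha are illustrative placeholders, not values from the papers.

# Illustrative threshold stopping policy. Per the papers above, the optimal
# defender policy is threshold based: continue monitoring until a statistic
# (e.g. the belief that an intrusion is ongoing) crosses a threshold, then
# "stop", i.e. take the defensive action. alpha=0.5 is an arbitrary placeholder.
CONTINUE, STOP = 0, 1

def threshold_policy(intrusion_belief: float, alpha: float = 0.5) -> int:
    """Stop (defend) iff the intrusion belief exceeds the threshold alpha."""
    return STOP if intrusion_belief >= alpha else CONTINUE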

See also

Author & Maintainer

Kim Hammar kimham@kth.se

Copyright and license

LICENSE: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

(C) 2021, Kim Hammar