The TTCP CAGE Challenges are a series of public challenges instigated to foster the development of autonomous cyber defensive agents. This CAGE Challenge 4 (CC4) returns to a defence industry enterprise environment, and introduces a Multi-Agent Reinforcement Learning (MARL) scenario.
When the Monitor action is executed, it inspects the children of the blue agent's `VelociraptorServer` session to determine whether events should be added to the observation. However, the `VelociraptorServer` itself also produces events, and since the server is not a child of itself, it is never checked. As a result, two things occur:
1. The server's own events are never added to the observation.
2. The events are never cleared from the host, so the corresponding event bit in the observation vector becomes permanently stuck at 1.
A simple solution would be to iterate over `state.sessions[self.agent].values()` instead of `state.sessions[self.agent][self.session].children.values()` on line 51 of `Monitor.py`, and to add a check before that line ensuring the session is a `VelociraptorServer`.
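The proposed change can be sketched as follows. This is a minimal illustration using stand-in `State`/`Session` classes and a hypothetical `monitor_events` helper, not the actual `Monitor.py` code; the real CybORG classes carry far more state.

```python
class Session:
    """Stand-in for a CybORG session holding pending host events."""
    def __init__(self, events=None):
        self.events = list(events or [])

class VelociraptorServer(Session):
    """Stand-in for CybORG's VelociraptorServer session type."""

class State:
    def __init__(self, sessions):
        # agent name -> {session id -> Session}
        self.sessions = sessions

def monitor_events(state, agent, session_id):
    # Additional check (hypothetical placement): the acting session
    # must be a VelociraptorServer, as the old loop implicitly assumed.
    if not isinstance(state.sessions[agent][session_id], VelociraptorServer):
        return []
    observed = []
    # Old code iterated state.sessions[agent][session_id].children.values(),
    # which skips the server itself. Iterating over all of the agent's
    # sessions includes the server, so its events are observed and cleared.
    for session in state.sessions[agent].values():
        observed.extend(session.events)
        session.events = []  # clear so the event bit is not stuck at 1
    return observed
```

With this loop, a server-generated event is both reported in the observation and removed from the host, so the corresponding observation bit can return to 0 on the next step.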