How do we want to represent a denial of service attack on an Application due to denial of networking assets associated with it?
The current implementation, as of fafb61bb4ea042b08d4e73feae425d3b127a93ae, is that all networking assets associated with an Application need to be denied for the denial to propagate to the Application itself. This was done to make the behaviour more consistent with the comments in the code and previous commit messages. However, it was not meant to definitively resolve the question of how this problem should be addressed. The decision was made to defer the discussion until later, and this issue summarises the arguments presented so far.
The people who raised points during the conversation, and who may serve as good references if this is debated again, were: @mathiasekstedt Victor @joarjox @skatsikeas @andrewbwm
Four broad perspectives were provided:
In order for an Application to be denied via networking assets, all of them must be denied. This is the current implementation. The idea behind it is that if the Application can still communicate in any way via some networking asset, it is assumed to still provide some functionality and therefore cannot be considered denied.
However, in some circumstances an Application that is unable to communicate via any of its networking assets can still function normally, if its duties do not require networking capabilities. In those situations this implementation would still incorrectly flag the Application as denied.
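As a rough illustration of this semantics, a MAL sketch could use an AND attack step, which only triggers once every connected parent step has been reached. The asset, step, and association names below are hypothetical, not the actual coreLang definitions:

```
category System {
  asset Application {
    // AND step: reached only once *every* associated Network's
    // deny has propagated here, matching the current behaviour.
    & denialFromNetworking
      -> deny
    | deny
  }
}

category Networking {
  asset Network {
    | deny
      -> applications.denialFromNetworking
  }
}

associations {
  Network [networks] * <-- NetworkExposure --> * [applications] Application
}
```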
If any of the networking assets connected to an Application is denied, then the Application is denied. The reasoning behind this approach is that all of the networking assets included in a model are assumed to be necessary for the proper functioning of the Application: modellers more commonly include only the critical networking elements than redundant ones.
This approach has the additional problem that it would frequently mask denial of service originating from other sources. It is quite common for an attacker to reach an Application via `NetworkConnect`, which very often means going through a networking asset that the attacker could deny. This would cause the deny step to be triggered on most of the Applications that the attacker can reach via networking assets.
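Under this perspective, the sketch above would only change the step type: an OR step means a single denied Network suffices (again with hypothetical names):

```
asset Application {
  // OR step: any one denied associated Network denies the Application.
  | denialFromNetworking
    -> deny
  | deny
}
```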
A more nuanced implementation could define some networking associations as redundant and others as critical, and denial of service would propagate accordingly. This implementation would likely be somewhat similar to how IcsSystems function in icsLang (https://github.com/mal-lang/icsLang/pull/3). This solution would require a significant redesign of how networking asset associations are defined and would demand more attention from the modeller.
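One way this could look, as a sketch only loosely inspired by the icsLang approach and using invented association and step names, is to split the association into a critical and a redundant variant and combine an OR step with an AND step:

```
category System {
  asset Application {
    // Any single denied critical network denies the Application.
    | denialFromCriticalNetwork
      -> deny
    // Redundant networks must *all* be denied before the
    // Application itself is denied.
    & denialFromRedundantNetworks
      -> deny
    | deny
  }
}

category Networking {
  asset Network {
    | deny
      -> criticalApplications.denialFromCriticalNetwork,
         redundantApplications.denialFromRedundantNetworks
  }
}

associations {
  Network [criticalNetworks] * <-- CriticalExposure --> * [criticalApplications] Application
  Network [redundantNetworks] * <-- RedundantExposure --> * [redundantApplications] Application
}
```

The cost of this design is visible in the model itself: the modeller must decide, for every association, whether a given networking asset is critical or redundant to the Application.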
Never deny an Application via networking logic. An Application can only be denied via attacker actions on the Application itself, not on associated elements. This means a modeller would have to interpret the results themselves and determine that, due to network disruptions, the Application cannot provide its functionality.
Another concern these conversations unearthed was that the current coreLang implementation may trigger denial of service on networking assets too easily. If an attacker is able to get access (either `fullAccess` or `specificAccess`) on an Application that is associated via inbound connections to networking assets, those assets are trivially denied.
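The pattern in question looks roughly like the following simplified sketch (not the exact coreLang source; `inboundNetworks` is a hypothetical role name standing in for the inbound-connection associations):

```
asset Application {
  // Either access level immediately reaches deny on every
  // networking asset the Application receives inbound
  // connections from, with no further preconditions.
  | fullAccess
    -> inboundNetworks.deny
  | specificAccess
    -> inboundNetworks.deny
}
```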