networked-systems-iith / AdaFlow

AdaFlow: An Efficient In-Network Cache for Intrusion Detection using Programmable Data Planes

Security Analysis #13

Closed Sankalp-CS21MTECH12010 closed 1 year ago

Sankalp-CS21MTECH12010 commented 1 year ago

Reviewer Comments: Evasion: There currently seem to be very straightforward ways to evade the system; e.g., the flow eviction mechanism prioritizes the first malicious flow, so all subsequent malicious flows that hash into the same bucket get ignored. Similarly, each of the two timers seems to offer additional opportunities for malicious flows to remain undetected. The assumption here needs to be that an adversary knows the system that is being deployed and could thus easily circumvent detection. This needs to be at least discussed in more detail.
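To make the bucket-collision evasion concrete, here is a minimal sketch (my own illustration, not AdaFlow's actual data-plane logic; the table size and hash are arbitrary assumptions) of a direct-mapped flow table that keeps the first flow claiming a bucket and silently ignores later colliding flows. An adversary who knows the hash function can search offline for flow keys that collide with an already-occupied bucket:

```python
import hashlib

# Illustrative only: a tiny direct-mapped flow table where the first flow
# to claim a bucket stays, and later colliding flows go untracked.
TABLE_SIZE = 8  # deliberately small so collisions are easy to find


def bucket(flow_key: str) -> int:
    """Hash a 5-tuple-like flow key string into a bucket index."""
    digest = hashlib.sha256(flow_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % TABLE_SIZE


class NaiveFlowCache:
    def __init__(self):
        self.table = {}  # bucket index -> first flow key that claimed it

    def insert(self, flow_key: str) -> bool:
        """Return True if the flow is tracked, False if it was ignored."""
        b = bucket(flow_key)
        if b in self.table and self.table[b] != flow_key:
            return False  # bucket already occupied: collision is dropped
        self.table[b] = flow_key
        return True


cache = NaiveFlowCache()
first = "10.0.0.1:1234->10.0.0.2:80/tcp"
assert cache.insert(first)  # first flow claims its bucket

# Offline, the adversary enumerates candidate flow keys until one hashes
# into the bucket already held by a tracked flow.
target = bucket(first)
evader = None
for i in range(100_000):
    key = f"192.168.{i // 256 % 256}.{i % 256}:{40000 + i}->10.0.0.2:80/tcp"
    if key != first and bucket(key) == target:
        evader = key
        break

assert evader is not None
assert not cache.insert(evader)  # the colliding malicious flow is ignored
```

With only 8 buckets a collision appears within a handful of tries; in a real deployment the table is far larger, but the search is still feasible offline if the hash and table size are known, which is exactly the "adversary knows the system" assumption the reviewer asks to be discussed.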

The security analysis can be more systematic. Generally, data-plane telemetry systems could be vulnerable to the types of attacks (e.g., disruption and evasion) illustrated in the paper “Data-Plane security applications in adversarial settings”, and I would like to see a more organized discussion on how AdaFlow handles those attacks.

I invite the authors to come up with realistic threat models, potentially by integrating some economic considerations, and then assess the impact of the corresponding attacks. See [E, D, J] for some relevant references on this subject.

My impression is that the paper is claiming that AdaFlow is "invulnerable" (or that it is trivial to make it so), which is something that I strongly disagree with. Do note that it is acceptable if AdaFlow is shown to be vulnerable to some attacks (see, e.g., [K]).

[D]: Arp, Daniel, et al. "Dos and Don'ts of Machine Learning in Computer Security." 31st USENIX Security Symposium (USENIX Security 22). 2022.

[E]: Apruzzese, Giovanni, et al. "Modeling realistic adversarial attacks against network intrusion detection systems." Digital Threats: Research and Practice (DTRAP) 3.3 (2022): 1-19.

[J]: Apruzzese, Giovanni, et al. "Position: 'Real Attackers Don't Compute Gradients': Bridging the Gap Between Adversarial ML Research and Practice." IEEE Conference on Secure and Trustworthy Machine Learning. IEEE, 2022.

[K]: Aghakhani, Hojjat, et al. "When Malware Is Packin' Heat; Limits of Machine Learning Classifiers Based on Static Analysis Features." Network and Distributed Systems Security (NDSS) Symposium 2020. 2020.

To-Do: Address some of these comments in the revised version of the paper.

harshith-kotha5084 commented 1 year ago

I have gone through the paper 'Data-Plane security applications in adversarial settings'. Please find the link to a doc highlighting the important points from the paper.

In the 'Design Pitfalls' section, all the possible attacks on data-plane applications are specified. Next, the paper studies the security mechanisms of 6 applications, discusses their vulnerabilities to adversarial inputs, and highlights the need to develop applications with security threats and mitigation strategies in mind.

link to the doc: https://docs.google.com/document/d/1dJBHFdUWwX4uOqThQ4hchfMmRTf_e1DC-eEBI5_Wccs/edit?usp=sharing

Sankalp-CS21MTECH12010 commented 1 year ago

@ANANDKRISHNAM Can you please get an idea of papers [D] to [K], and think about how the security analysis part of AdaFlow can benefit from them.

harshith-kotha5084 commented 1 year ago

PFA link to the security analysis doc.

link: https://docs.google.com/document/d/1k1PBf2a_iJ7i13FYjWLRlWXw12dmT4HH2X8i0Fmx1RM/edit?usp=sharing

The first section lists all the possible security threats described in the paper 'Data-Plane security applications in adversarial settings' for in-network applications, along with a detailed analysis. The second section covers the possible attacks on AdaFlow via different parts of the design and highlights the mitigation techniques.

Sankalp-CS21MTECH12010 commented 1 year ago

@harshith-kotha5084 Thanks will go through it!