newaetech / chipwhisperer-contest-2021


Attacking Neural Processing Units with Side-Channel Analysis #9

Open sasalamol opened 2 years ago

sasalamol commented 2 years ago

Introduction

Through side-channel analysis (SCA) [1], a class of attacks that exploits the physical characteristics of electronic devices, an attacker can retrieve information about the internal values a device is handling. When this data is sensitive, for instance when it comprises cryptographic key material in an embedded cryptosystem or the weights of a trained machine learning model deployed for inference, appropriate countermeasures should be applied before deployment. SCA attacks and their countermeasures are well studied in the field of cryptographic engineering and include power analysis attacks, which require access to the power supply circuit, and electromagnetic (EM) attacks, which can be performed non-invasively and offer higher spatial locality.
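
To illustrate the analysis step concretely, the sketch below shows the core of a correlation-based SCA attack in plain NumPy: a predicted leakage value per trace is formed for every candidate secret, correlated against the recorded traces, and the candidate with the strongest correlation peak becomes the best guess. The traces and leakage predictions here are synthetic placeholders; in a real attack they would come from measurements and from a leakage model such as the Hamming weight of an intermediate value.

```python
# Minimal sketch of correlation power analysis (CPA), the workhorse of many SCA attacks.
# `traces` and `hypotheses` are synthetic stand-ins: real traces come from a scope or
# EM probe, and hypotheses from a leakage model evaluated per candidate secret value.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples, n_candidates = 500, 100, 256

traces = rng.normal(size=(n_traces, n_samples))                                # measurements
hypotheses = rng.integers(0, 9, size=(n_traces, n_candidates)).astype(float)   # predicted leakage

def cpa(traces, hypotheses):
    """|Pearson correlation| between each candidate's predicted leakage and every
    time sample; the candidate with the highest peak is the best guess."""
    t = traces - traces.mean(axis=0)
    h = hypotheses - hypotheses.mean(axis=0)
    num = h.T @ t                                                              # (candidates, samples)
    den = np.outer(np.linalg.norm(h, axis=0), np.linalg.norm(t, axis=0))
    return np.abs(num / den)

corr = cpa(traces, hypotheses)
print("best candidate:", corr.max(axis=1).argmax())
```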

When a device loaded with a trained machine learning model is accessible to an attacker, as can occur when it is deployed for inference at the edge, the architecture and weights of the model can be extracted through SCA. The extracted model can either be copied outright, resulting in a loss of valuable intellectual property, or be used subsequently to help craft so-called adversarial attacks [2].
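
As a toy illustration of the weight-recovery idea in [3, 4], the sketch below simulates a single 8-bit fixed-point multiply whose result leaks through its Hamming weight and recovers the weight by correlating that leakage against every candidate value. The leakage model, bit width, noise level and trace count are illustrative assumptions, not a description of any particular device.

```python
# Toy weight extraction: for known inputs x, the multiply w*x is assumed to leak its
# Hamming weight (plus Gaussian noise); correlating the observed leakage against the
# predictions for every candidate weight reveals the true weight.
import numpy as np

rng = np.random.default_rng(1)
true_weight = 0x5A
inputs = rng.integers(0, 256, size=2000)

hw = lambda v: np.vectorize(lambda x: bin(int(x)).count("1"))(v)   # Hamming weight

# Simulated single-sample leakage of the (truncated) multiply result.
leakage = hw((true_weight * inputs) & 0xFF) + rng.normal(scale=1.0, size=inputs.size)

# Rank candidate weights by correlation; w = 0 predicts a constant and is skipped.
candidates = range(1, 256)
scores = [np.corrcoef(hw((w * inputs) & 0xFF), leakage)[0, 1] for w in candidates]
print("recovered weight:", 1 + int(np.argmax(scores)), "expected:", true_weight)
```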

Although SCA attacks have recently been shown to succeed against various inference devices [3, 4] and researchers have started applying known countermeasures from the realm of cryptography to neural network systems [5], this remains a nascent field of study with many avenues for impactful research. Furthermore, the proliferation of edge devices for inference, e.g. Google's Edge TPU [6], Xilinx's Versal AI Edge [7] and Nvidia's Jetson Nano [8], establishes a clear relevance of studying their vulnerability to SCA attacks for both industry and academia.

Proposal

The goal of this project is to investigate the vulnerability of neural processing units to electromagnetic side-channel attacks. To this end, the student will first analyse the architectures of several AI edge devices and isolate promising building blocks to target with an EM attack. The student will then implement these building blocks on an FPGA and perform the actual EM analysis. Once a successful attack is found, the student can choose between two paths forward depending on their interest: either the attack is developed further or countermeasures are applied. In the former case, more blocks of a chosen architecture will be implemented and attacked on the FPGA, and with further refinement a strategy will be devised to ultimately mount the attack on the corresponding real-world device. In the latter case, a suitable countermeasure will be implemented and evaluated with EM analysis.
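
As a rough sketch of what the measurement phase could look like with the ChipWhisperer Python API, the loop below captures traces of a hypothetical FPGA building block (e.g. a multiply-accumulate unit) wrapped as a SimpleSerial target that starts one computation on command 'p'. The command byte, input length and target wrapper are assumptions about how the student's own design would be hooked up; the same capture flow applies whether the scope input comes from a shunt resistor or an H-field probe for EM measurements.

```python
# Hedged sketch of a ChipWhisperer capture loop for an FPGA building block.
# Assumes the design is wrapped as a SimpleSerial target that runs one
# computation when it receives command 'p' with a 16-byte input.
import numpy as np
import chipwhisperer as cw

scope = cw.scope()          # CW-Lite/Pro/Husky; measure input fed by shunt or H-field probe
target = cw.target(scope)   # defaults to a SimpleSerial target
scope.default_setup()

traces, inputs = [], []
for _ in range(1000):
    data = bytearray(np.random.randint(0, 256, 16, dtype=np.uint8))
    scope.arm()
    target.simpleserial_write('p', data)   # trigger one run of the targeted block
    if scope.capture():                    # True means the capture timed out
        continue
    traces.append(scope.get_last_trace())
    inputs.append(list(data))

np.save("em_traces.npy", np.array(traces))
np.save("inputs.npy", np.array(inputs))
```

The saved traces and inputs can then be fed into a correlation analysis along the lines of the sketches above.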

Motivation

The meaning and utility of data have evolved drastically over the years, and as the value of data grows, so does the difficulty of safeguarding it. Our work is motivated by this security problem. Today, collected data must not only be stored but also interpreted, a task that is beyond human capacity because of its scale and sensitivity and is therefore delegated to artificial intelligence algorithms. Although these issues appear to be primarily software concerns, the algorithms must ultimately run on hardware. Our project aims to contribute to security-enhancing research by approaching the problem from the attacker's perspective, focusing on Neural Processing Units and their physical security vulnerabilities. In cryptographic engineering, physical security analyses and attacks, as well as their countermeasures, have been researched extensively. Although several attacks have recently been demonstrated against a variety of inference devices, and researchers have begun to apply proven protections from the domain of cryptography to neural network systems, this is still a new field of study with numerous avenues for promising findings.

References

[1] Introduction to Differential Power Analysis and Related Attacks by Kocher et al. https://www.rambus.com/wp-content/uploads/2015/08/DPATechInfo.pdf
[2] Robust Physical-World Attacks on Deep Learning Visual Classification by Eykholt et al. https://arxiv.org/pdf/1707.08945.pdf
[3] CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information by Batina et al. https://arxiv.org/pdf/1810.09076.pdf
[4] Model-Extraction Attack Against FPGA-DNN Accelerator Utilizing Correlation Electromagnetic Analysis by Yoshida et al. https://ieeexplore.ieee.org/abstract/document/8735505
[5] MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection by Dubey et al. https://arxiv.org/pdf/1910.13063.pdf
[6] Google Coral Edge TPU explained in depth. https://qengineering.eu/google-corals-tpu-explained.html
[7] Versal AI Edge Series. https://www.xilinx.com/products/silicon-devices/acap/versal-ai-edge.html
[8] Introducing Jetson Xavier NX, the World’s Smallest AI Supercomputer. https://developer.nvidia.com/blog/jetson-xavier-nx-the-worlds-smallest-ai-supercomputer/

colinoflynn commented 2 years ago

@sasalamol Can you drop a note to sales@newae.com with the contact email you prefer (and mention your issue # / GitHub username)? With some delay, we're finalizing the contest results now and realized GitHub doesn't allow us to message people here! Unfortunately this submission came in after the official cut-off, but we'd still like to provide feedback!