Tongzhou0101 / NNSplitter

This is the official implementation of NNSplitter (ICML'23)
MIT License

How is TEE simulated in this project? #2

Open mayank64ce opened 2 weeks ago

mayank64ce commented 2 weeks ago

In the paper, I read that the TEE implementation scheme is taken from ShadowNet (Sun et al., 2023).

I was wondering where it is implemented in the code?

Thank you.

Tongzhou0101 commented 2 weeks ago

Thank you for your question.

This work focuses on the algorithm design itself and does not include a TEE implementation in the released code. We referenced ShadowNet to direct readers to that work for the hardware implementation details; our method is designed to be compatible with the TEE deployment scheme described there.

I hope this answers your question.

mayank64ce commented 2 weeks ago

Yes, it does for the most part. Thanks for that.

In your codebase, could you please point me to the parts that are supposed to run on the TEE and the GPU, respectively?

Tongzhou0101 commented 2 weeks ago

The code generates two parts that should be executed on the GPU and the TEE, respectively.

For example, at line 150 of `main_cifar.py`, the model with weights `net_dict_new` should be stored and run on the GPU, while the selected convolutional kernels with weights `new_w - ori_w` should be run inside the TEE.
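
To make the split concrete, here is a minimal sketch (not part of the released code) of how those two artifacts could be packaged. Only `net_dict_new`, `new_w`, and `ori_w` come from the code discussed above; the function name, `sel_idx`, and the toy shapes are placeholders, and the actual TEE-side execution would follow the ShadowNet scheme rather than the simplified restoration step shown here.

```python
import torch


def split_for_deployment(net_dict_new, new_w, ori_w, sel_idx):
    """Package the GPU part and the TEE part of the split model.

    gpu_part: obfuscated state_dict, stored and executed in the normal world (GPU).
    tee_part: per-layer residuals (new_w - ori_w) for the selected kernels,
              kept inside the TEE.
    """
    gpu_part = {k: v.clone() for k, v in net_dict_new.items()}
    tee_part = {}
    for layer, idx in sel_idx.items():
        tee_part[layer] = {
            "kernel_idx": idx,
            "residual": new_w[layer][idx] - ori_w[layer][idx],
        }
    return gpu_part, tee_part


if __name__ == "__main__":
    # Toy example with made-up shapes, just to show the data flow.
    ori_w = {"conv1.weight": torch.randn(16, 3, 3, 3)}
    new_w = {"conv1.weight": ori_w["conv1.weight"].clone()}
    new_w["conv1.weight"][[0, 5]] += 0.1        # pretend these kernels were obfuscated
    net_dict_new = {"conv1.weight": new_w["conv1.weight"]}
    sel_idx = {"conv1.weight": [0, 5]}          # indices of the modified kernels

    gpu_part, tee_part = split_for_deployment(net_dict_new, new_w, ori_w, sel_idx)

    # In this toy setup, subtracting the residual from the obfuscated kernels
    # recovers the original kernels for the selected indices.
    restored = gpu_part["conv1.weight"][[0, 5]] - tee_part["conv1.weight"]["residual"]
    print(torch.allclose(restored, ori_w["conv1.weight"][[0, 5]]))  # True
```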