mofanv / darknetz

runs several layers of a deep learning model in TrustZone
MIT License

Inference problem #13

Closed: Mangoiscool closed this issue 4 years ago

Mangoiscool commented 4 years ago

Hi mofanv,

I am running a simulation on QEMU-v8 with OP-TEE. I hit the same problem as in this issue and fixed it by decreasing the required memory size (TA_DATA_SIZE).
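
Concretely, a minimal sketch of that fix, assuming the standard OP-TEE TA layout (the header path and the new value below are assumptions; adjust them to your tree):

    # Lower the TA heap size so the TA fits into the available secure memory.
    # File path and replacement value are assumptions based on the usual
    # OP-TEE example layout, not darknetz's exact settings.
    sed -i 's/#define TA_DATA_SIZE.*/#define TA_DATA_SIZE (1 * 1024 * 1024)/' \
      ta/user_ta_header_defines.h
    # Rebuild the TA afterwards so the new size takes effect.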

I used the following command from README.md:

    darknetp classifier predict -pp 4 cfg/mnist.dataset cfg/mnist_lenet.cfg models/mnist/mnist_lenet.weights data/mnist/images/t_00007_c3.png

But my output does NOT match this expected result:

100.00%: 3
 0.00%: 1
 0.00%: 2
 0.00%: 0
 0.00%: 4

I got:

 0.00%: 3
 0.00%: 1
 0.00%: 2
 0.00%: 0
 0.00%: 4

Is there a problem?

mofanv commented 4 years ago

Hi @Mangoiscool, I have also seen a similar unexpected inference output when switching to another device (RPi 3 or HiKey). It may be caused by the decryption key behaving differently when the DNN is loaded into the TEE on different devices; I will fix this in the next update. As a workaround, you can train a model from scratch with a -pp value and then run inference with that trained model, which should avoid the problem. A sketch follows below.
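
A hedged sketch of that workaround, assuming the train subcommand accepts the same -pp flag as predict and that training writes weights to darknet's usual backup/ directory (both the flag usage for train and the output path are assumptions here):

    # Train from scratch with the same partition point you will use for inference
    darknetp classifier train -pp 4 cfg/mnist.dataset cfg/mnist_lenet.cfg
    # Then predict with the freshly trained weights instead of the shipped ones
    darknetp classifier predict -pp 4 cfg/mnist.dataset cfg/mnist_lenet.cfg \
      backup/mnist_lenet.weights data/mnist/images/t_00007_c3.png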

Mangoiscool commented 4 years ago

@mofanv, I got it. Thanks!