mlcommons / tiny

MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
https://mlcommons.org/en/groups/inference-tiny/
Apache License 2.0

Loss is negative and accuracy=0.006 when trying to prune anomaly detection #118

Open · MounikaVaddeboina opened this issue 2 years ago

MounikaVaddeboina commented 2 years ago

I have used the TensorFlow Model Optimization Toolkit to prune the anomaly_detection benchmark.

I followed the same procedure as shown in the PQAT example here: https://www.tensorflow.org/model_optimization/guide/combine/pqat_example
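For context, this is roughly the standard tfmot magnitude-pruning recipe I am adapting. It is only a sketch, not my exact code: `model`, `train_data`, and the schedule values are placeholders standing in for the objects from the benchmark's training script.

```python
# Sketch of magnitude pruning with tfmot for the anomaly-detection autoencoder.
# Assumptions: `model` is the trained float Keras autoencoder and `train_data`
# is the feature array used as both input and reconstruction target.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

# Ramp sparsity from 0% to 50% over the fine-tuning run (values are examples).
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.5,
    begin_step=0,
    end_step=2680 * 10,  # steps_per_epoch * epochs; adjust to the actual run
)

pruned_model = prune_low_magnitude(model, pruning_schedule=pruning_schedule)

# Re-compile with the same objective the float model was trained with
# (mean squared reconstruction error).
pruned_model.compile(optimizer="adam", loss="mean_squared_error")

pruned_model.fit(
    train_data,
    train_data,  # autoencoder: target == input
    epochs=10,
    batch_size=512,
    callbacks=[tfmot.sparsity.keras.UpdatePruningStep()],
)

# Remove the pruning wrappers before export/conversion.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```

Note that the float training log below only reports loss/val_loss (no accuracy metric), so the re-compile in this sketch keeps the same MSE reconstruction loss.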

The output during training is like this:

Epoch 2/100
2412/2412 [==============================] - 20s 8ms/step - loss: 11.1539 - val_loss: 11.1307
Epoch 3/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.6982 - val_loss: 10.6691
Epoch 4/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.4117 - val_loss: 10.5804
Epoch 5/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.2858 - val_loss: 10.2876
Epoch 6/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.1822 - val_loss: 10.2884
Epoch 7/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.1250 - val_loss: 10.2690
Epoch 8/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.0805 - val_loss: 10.3325

Output during pruning is like this:

Epoch 1/100
2680/2680 [==============================] - 47s 16ms/step - loss: -291053.6562 - accuracy: 0.0359
Epoch 2/100
2680/2680 [==============================] - 42s 16ms/step - loss: -287242.3438 - accuracy: 0.0335
Epoch 3/100
2680/2680 [==============================] - 42s 16ms/step - loss: -294022.2500 - accuracy: 0.0341
Epoch 4/100
2680/2680 [==============================] - 41s 15ms/step - loss: -301931.7188 - accuracy: 0.0336
Epoch 5/100
2680/2680 [==============================] - 41s 15ms/step - loss: -311050.6875 - accuracy: 0.0294
Epoch 6/100
2680/2680 [==============================] - 42s 15ms/step - loss: -321004.3750 - accuracy: 0.0241
Epoch 7/100
2680/2680 [==============================] - 41s 15ms/step - loss: -331941.9375 - accuracy: 0.0177
Epoch 8/100
2680/2680 [==============================] - 42s 16ms/step - loss: -343348.4688 - accuracy: 0.0109
Epoch 9/100
2680/2680 [==============================] - 42s 16ms/step - loss: -355611.3438 - accuracy: 0.0080
Epoch 10/100
2680/2680 [==============================] - 42s 16ms/step - loss: -368586.7812 - accuracy: 0.0073
Epoch 11/100
2680/2680 [==============================] - 42s 16ms/step - loss: -382228.8125 - accuracy: 0.0068
Epoch 12/100
2680/2680 [==============================] - 42s 16ms/step - loss: -396485.6875 - accuracy: 0.0066
Epoch 13/100
2680/2680 [==============================] - 42s 16ms/step - loss: -411286.4062 - accuracy: 0.0065
Epoch 14/100
2680/2680 [==============================] - 41s 15ms/step - loss: -426653.8750 - accuracy: 0.0061
Epoch 15/100
2680/2680 [==============================] - 41s 15ms/step - loss: -442495.9062 - accuracy: 0.0056
Epoch 16/100
2680/2680 [==============================] - 41s 15ms/step - loss: -458785.0625 - accuracy: 0.0049

Can you help me with this issue?

cskiraly commented 2 years ago

@MounikaVaddeboina I'm sorry, but we haven't tried pruning so far, so it is difficult to say what could go wrong. If you fork the repo and commit your modified code to a branch, someone who has already tried pruning the model might be able to spot the problem.