CEA-LIST / N2D2

N2D2 is an open source CAD framework for Deep Neural Network simulation and the building of full DNN-based applications.

float64 results in a much lower success rate than float32 for cifar-10 #62

Closed · noureddine-as closed this issue 4 years ago

noureddine-as commented 4 years ago

Hello! First, thank you very much for creating and supporting this beautiful piece of engineering!

I was experimenting with the tool. I went through the MNIST example cited in the documentation and it worked fine for the several export options I tested (C int8, float32, float64); the success rate was very high.

However, for CIFAR-10, I got the following result for the float32 C export:

$ n2d2 "$N2D2_MODELS/cifar-10.ini" -export C -nbbits -32 -calib -1
...
Tested 10000 stimuli
Success rate = 83.750000%
Process time per stimulus = 12687.039700 us (12 threads)

Whereas for the float64 C export, the success rate was surprisingly low:

$ n2d2 "$N2D2_MODELS/cifar-10.ini" -export C -nbbits -64 -calib -1
...
Tested 10000 stimuli
Success rate = 10.000000%
Process time per stimulus = 13886.577900 us (12 threads)
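
For reference, my understanding from the documentation is that the generated C export can also be compiled and run standalone, roughly as follows; the export directory name here is an assumption on my part and may differ on your setup:

$ cd export_C_float32
$ make
$ ./bin/n2d2_test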

The training was performed on an NVIDIA Tesla K80 on Google Cloud Platform with the following arguments:

n2d2 "$N2D2_MODELS/cifar-10.ini" -learn 5000000 -log 50000

In addition, running a test shows a success rate of 83.79%:

Final recognition rate: 83.79%    (error rate: 16.21%)
    Sensitivity: 83.79% / Specificity: 98.20% / Precision: 83.91%
    Accuracy: 96.76% / F1-score: 83.70% / Informedness: 81.99%

(I also tried the int8 export to see whether the same problem persists, but I got the same runtime_error mentioned in https://github.com/CEA-LIST/N2D2/issues/57. However, since I don't know much about DNN design, I didn't quite understand what should be changed.)
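
For completeness, the int8 export was attempted with the same flag pattern as above; a positive -nbbits value presumably selects integer precision, since the negative values above select floating point:

$ n2d2 "$N2D2_MODELS/cifar-10.ini" -export C -nbbits 8 -calib -1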

Best regards, Noureddine.

olivierbichler-cea commented 4 years ago

Hi,

Thank you for your report; it has been fixed in the latest commits. (300K is the total number of trainable parameters in the cifar-10 model's network.)

Regarding the int8 export, I updated the model to be quantization friendly and it should work as well (with such a simple model, there is no accuracy drop in int8 compared to floating point).
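
To illustrate, a quantization-friendly layer declaration in an N2D2 INI model typically looks like the sketch below (illustrative section names and values, not the actual commit; rectified activations generally quantize better than saturating ones):

[conv1]
Input=sp
Type=Conv
KernelWidth=5
KernelHeight=5
NbOutputs=24
ActivationFunction=Rectifier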

Cheers, Olivier

noureddine-as commented 4 years ago

Thank you very much!