Closed: noureddine-as closed this issue 4 years ago
Hi,
Thank you for your report, it has been fixed in the latest commits. 300K is the total number of trainable parameters in the network of the cifar-10 model.
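For illustration only, here is how a total like 300K trainable parameters comes about for a small CIFAR-10-style CNN. The layer sizes below are assumed for the sketch and are not taken from the actual N2D2 model:

```python
def conv_params(k, c_in, c_out, bias=True):
    # each output channel has a k*k*c_in weight kernel, plus an optional bias
    return (k * k * c_in + (1 if bias else 0)) * c_out

def fc_params(n_in, n_out, bias=True):
    # fully connected layer: one weight per input per output, plus optional biases
    return (n_in + (1 if bias else 0)) * n_out

# hypothetical layer sizes (illustrative only, not the real network)
total = (conv_params(3, 3, 32)        # 3x3 conv, 3 -> 32 channels
         + conv_params(3, 32, 64)     # 3x3 conv, 32 -> 64 channels
         + fc_params(64 * 8 * 8, 64)  # flatten 8x8x64 feature map -> 64 units
         + fc_params(64, 10))         # 10 CIFAR-10 classes
print(total)  # ~282K for these assumed sizes
```

Almost all of the parameters sit in the first fully connected layer, which is typical for small CNNs of this shape.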
Regarding the int8 export, I updated the model to be quantization friendly and it should work as well (with such a simple model, there is no accuracy drop in int8 compared to floating point).
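As a rough sketch of why int8 can match floating point on a simple model: symmetric post-training quantization maps weights onto 255 integer levels, so the per-weight rounding error is bounded by half a quantization step. This is a generic illustration with NumPy, not N2D2's actual quantization code:

```python
import numpy as np

def quantize_int8(w):
    # symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 3, 32)).astype(np.float32)

q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(err <= scale / 2 + 1e-7)  # rounding error is at most half a step
```

With enough redundancy in a small network, an error of this size per weight has little effect on the final classification, which is consistent with seeing no accuracy drop in int8.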
Cheers, Olivier
Thank you very much!
Hello! First, thank you very much for creating and supporting this beautiful piece of engineering!
I was experimenting with the tool. I went through the mnist example cited in the documentation and it worked fine for the several export options I tested (C int8, float32, float64), and the success rate was very high.
However, for CIFAR-10, the success rate I got for the `float32` C export was much better than for the `float64` C export, where the success rate was surprisingly low. The training was performed on a Google Cloud Platform NVIDIA Tesla K80 with the following arguments:

Performing a test also shows a success rate of 83.79%.
(I also tried the int8 export to see if the same problem persists, but I got the same runtime_error mentioned in https://github.com/CEA-LIST/N2D2/issues/57. However, since I don't know much about DNN design, I didn't quite understand what should be changed.)
Best regards, Noureddine.