lmartak / distill-nn-tree

Distillation of Neural Network Into a Soft Decision Tree
https://vgg.fiit.stuba.sk/people/martak/distill-nn-tree
MIT License

Results are not reproducible #6

Open Songyima opened 3 years ago

Songyima commented 3 years ago

Hi, I ran your code on my own server. The results are:

[No distill]
10000/10000 [==============================] - 8s 783us/sample - loss: 7.5134 - acc: 0.9085
accuracy: 90.85% | loss: 7.513414146804809
10000/10000 [==============================] - 8s 785us/sample - loss: 7.5161 - acc: 0.9032
accuracy: 90.32% | loss: 7.516079863357544

[distill with soft target]
Saving trained model to assets/distilled/tree-model.
10000/10000 [==============================] - 7s 711us/sample - loss: 7.7522 - acc: 0.8254
accuracy: 82.54% | loss: 7.7521795679092405
10000/10000 [==============================] - 8s 758us/sample - loss: 7.7434 - acc: 0.8189
accuracy: 81.89% | loss: 7.7433844789505
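For context on the "[distill with soft target]" step, below is a minimal sketch of soft-target distillation, assuming the usual temperature-softened teacher outputs are used as training targets for the student. It is not the repository's actual code; the names (`teacher`, `student`, `T`) and the plain dense student are illustrative stand-ins.

```python
# Minimal soft-target distillation sketch (illustrative, not this repo's code).
import tensorflow as tf

T = 2.0  # distillation temperature (assumed hyperparameter)

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# Hypothetical teacher network producing logits (in practice it would be
# pre-trained on the hard labels before distillation).
teacher = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),  # logits, no softmax
])

# Soft targets: teacher logits softened by the temperature.
soft_targets = tf.nn.softmax(teacher.predict(x_train) / T, axis=-1).numpy()

# Student (stand-in for the soft decision tree) trained on the soft targets
# instead of the one-hot labels.
student = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
student.compile(optimizer="adam",
                loss=tf.keras.losses.CategoricalCrossentropy())
student.fit(x_train, soft_targets, epochs=1, batch_size=128)
```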

Songyima commented 3 years ago

I also modified the code from https://github.com/kimhc6028/soft-decision-tree. Surprisingly, with depth 4, the results are about the same: roughly 90% accuracy before distillation and 80% after.
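For readers unfamiliar with what "depth 4" means here, below is a minimal NumPy sketch of a Frosst & Hinton-style soft decision tree forward pass: each inner node routes a sample probabilistically with a sigmoid gate, and the prediction is a mixture of the leaf distributions weighted by path probabilities. The parameters are random and the names are illustrative; this is not either repository's code.

```python
# Soft decision tree forward pass sketch (illustrative, random parameters).
import numpy as np

def soft_tree_predict(x, inner_w, inner_b, leaf_logits):
    """x: (n_features,); inner_w: (n_inner, n_features); inner_b: (n_inner,);
    leaf_logits: (n_leaves, n_classes), with n_leaves == n_inner + 1."""
    n_inner = inner_w.shape[0]
    # Heap indexing of a complete binary tree: node i has children 2i+1, 2i+2.
    probs = np.zeros(2 * n_inner + 1)
    probs[0] = 1.0
    for i in range(n_inner):
        p_right = 1.0 / (1.0 + np.exp(-(inner_w[i] @ x + inner_b[i])))
        probs[2 * i + 1] = probs[i] * (1.0 - p_right)  # left child
        probs[2 * i + 2] = probs[i] * p_right          # right child
    leaf_probs = probs[n_inner:]                       # last n_inner + 1 nodes are leaves
    leaf_dist = np.exp(leaf_logits)
    leaf_dist /= leaf_dist.sum(axis=1, keepdims=True)  # per-leaf class distribution
    return leaf_probs @ leaf_dist                      # mixture over leaves

# Depth 4 => 2**4 - 1 = 15 inner nodes and 2**4 = 16 leaves.
depth, n_features, n_classes = 4, 784, 10
n_inner, n_leaves = 2**depth - 1, 2**depth
rng = np.random.default_rng(0)
pred = soft_tree_predict(rng.standard_normal(n_features),
                         rng.standard_normal((n_inner, n_features)),
                         rng.standard_normal(n_inner),
                         rng.standard_normal((n_leaves, n_classes)))
print(pred.sum())  # ~1.0: a valid class distribution
```

A depth-4 tree has only 16 leaf distributions, so some accuracy drop relative to the teacher network is expected; the question in this issue is whether the size of the drop matches what the paper and the README report.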