NervanaSystems / neon

Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
http://neon.nervanasys.com/docs/latest
Apache License 2.0

Why does num_batches change? #388

Open guoxuesong opened 7 years ago

guoxuesong commented 7 years ago
$ python mnist_mlp.py -z 512

Epoch 0   [Train |████████████████████|  118/118  batches, 0.41 cost, 0.93s]
Epoch 1   [Train |████████████████████|  117/117  batches, 0.29 cost, 0.92s]
Epoch 2   [Train |████████████████████|  117/117  batches, 0.23 cost, 0.93s]
Epoch 3   [Train |████████████████████|  117/117  batches, 0.19 cost, 0.93s]
Epoch 4   [Train |████████████████████|  117/117  batches, 0.18 cost, 0.92s]
Epoch 5   [Train |████████████████████|  118/118  batches, 0.16 cost, 0.93s]
Epoch 6   [Train |████████████████████|  117/117  batches, 0.15 cost, 0.93s]
Epoch 7   [Train |████████████████████|  117/117  batches, 0.14 cost, 0.93s]
Epoch 8   [Train |████████████████████|  117/117  batches, 0.13 cost, 0.92s]
Epoch 9   [Train |████████████████████|  117/117  batches, 0.12 cost, 0.92s]
2017-07-28 05:55:25,319 - neon - DISPLAY - Misclassification error = 2.8%

It's interesting that epochs 0 and 5 run 118 batches while the others run 117. It looks like two branches of code decide num_batches in different ways.
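For reference, MNIST has 60,000 training images, so with `-z 512` the per-epoch batch count falls between floor(60000/512) = 117 and ceil(60000/512) = 118. A minimal sketch of the kind of rounding mismatch that could produce both numbers is below; the variable names are illustrative, not neon attributes, and the actual cause inside neon may differ:

```python
ndata = 60000      # MNIST training set size
batch_size = 512   # value passed via -z

# One code path could truncate, dropping the final partial batch ...
num_batches_floor = ndata // batch_size           # 117

# ... while another could round up, counting the final partial batch.
num_batches_ceil = -(-ndata // batch_size)        # 118 (ceiling division)

print(num_batches_floor, num_batches_ceil)        # 117 118
```

If the progress display and the data iterator use different rounding, the reported total could plausibly alternate between 117 and 118, but this is only a guess at the mechanism.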

wei-v-wang commented 7 years ago

Thanks for reporting this! We will fix it in a future release.