brjathu / deepcaps

Official Implementation of "DeepCaps: Going Deeper with Capsule Networks" paper (CVPR 2019).
MIT License

performance #19

Open Christinepan881 opened 4 years ago

Christinepan881 commented 4 years ago

I ran the code directly and only got about 10% accuracy on the MNIST training set. For the validation set, the accuracy stays at 0.0000e+00. Is there anything wrong? lol.

Epoch 00001: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00001: capsnet_accuracy improved from -inf to 0.10517, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 157s 672ms/step - loss: 0.5766 - capsnet_loss: 0.5524 - decoder_loss: 0.0605 - capsnet_accuracy: 0.1052 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 2/100
234/234 [==============================] - ETA: 0s - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1057
Epoch 00002: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00002: capsnet_accuracy improved from 0.10517 to 0.10573, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 668ms/step - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1057 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 3/100
234/234 [==============================] - ETA: 0s - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1057
Epoch 00003: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00003: capsnet_accuracy did not improve from 0.10573
234/234 [==============================] - 156s 666ms/step - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1057 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 4/100
234/234 [==============================] - ETA: 0s - loss: 0.5478 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1051
Epoch 00004: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00004: capsnet_accuracy did not improve from 0.10573
234/234 [==============================] - 156s 666ms/step - loss: 0.5478 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1051 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 5/100
234/234 [==============================] - ETA: 0s - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1068
Epoch 00005: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00005: capsnet_accuracy improved from 0.10573 to 0.10682, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 667ms/step - loss: 0.5477 - capsnet_loss: 0.5241 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1068 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 6/100
234/234 [==============================] - ETA: 0s - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1073
Epoch 00006: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00006: capsnet_accuracy improved from 0.10682 to 0.10732, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 666ms/step - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1073 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 7/100
234/234 [==============================] - ETA: 0s - loss: 0.5476 - capsnet_loss: 0.5240 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1080
Epoch 00007: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00007: capsnet_accuracy improved from 0.10732 to 0.10803, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 667ms/step - loss: 0.5476 - capsnet_loss: 0.5240 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1080 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 8/100
234/234 [==============================] - ETA: 0s - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1092
Epoch 00008: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00008: capsnet_accuracy improved from 0.10803 to 0.10917, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 666ms/step - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1092 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 9/100
234/234 [==============================] - ETA: 0s - loss: 0.5475 - capsnet_loss: 0.5238 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1059
Epoch 00009: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00009: capsnet_accuracy did not improve from 0.10917
234/234 [==============================] - 156s 667ms/step - loss: 0.5475 - capsnet_loss: 0.5238 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1059 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 10/100
234/234 [==============================] - ETA: 0s - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1072
Epoch 00010: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00010: capsnet_accuracy did not improve from 0.10917
234/234 [==============================] - 156s 666ms/step - loss: 0.5475 - capsnet_loss: 0.5239 - decoder_loss: 0.0591 - capsnet_accuracy: 0.1072 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 0.0010
Epoch 11/100
234/234 [==============================] - ETA: 0s - loss: 0.5472 - capsnet_loss: 0.5236 - decoder_loss: 0.0589 - capsnet_accuracy: 0.1115
Epoch 00011: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00011: capsnet_accuracy improved from 0.10917 to 0.11146, saving model to model/CIFAR10/13/best_weights_2.h5
234/234 [==============================] - 156s 668ms/step - loss: 0.5472 - capsnet_loss: 0.5236 - decoder_loss: 0.0589 - capsnet_accuracy: 0.1115 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 5.0000e-04
Epoch 12/100
234/234 [==============================] - ETA: 0s - loss: 0.5472 - capsnet_loss: 0.5236 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1114
Epoch 00012: saving model to model/CIFAR10/13/best_weights_1.h5
Epoch 00012: capsnet_accuracy did not improve from 0.11146
234/234 [==============================] - 156s 667ms/step - loss: 0.5472 - capsnet_loss: 0.5236 - decoder_loss: 0.0590 - capsnet_accuracy: 0.1114 - val_loss: 0.0000e+00 - val_capsnet_loss: 0.0000e+00 - val_decoder_loss: 0.0000e+00 - val_capsnet_accuracy: 0.0000e+00 - lr: 5.0000e-04
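Editor's note on the all-zero `val_*` metrics: one possible cause (an assumption, not confirmed from this repo's code) is that the validation pass never actually runs — in Keras of this era, if `fit_generator` ends up with a validation step count of zero (e.g. the count is computed by floor division and the validation set is smaller than one batch), no validation batches are evaluated and every `val_*` metric is logged as 0.0000e+00. A minimal sketch of that arithmetic; the function name `validation_steps` is hypothetical, not from the repo:

```python
# Hypothetical sketch: how a floor-divided step count can silently
# disable validation. Zero validation steps means no validation pass,
# and the val_* metrics then show up as 0.0000e+00 in the logs.
def validation_steps(num_val_samples: int, batch_size: int) -> int:
    # Floor division: a validation set smaller than one batch yields 0.
    return num_val_samples // batch_size

print(validation_steps(10000, 256))  # 39 -> validation runs normally
print(validation_steps(100, 256))    # 0  -> validation silently skipped
```

Checking what the training script actually passes as validation data (and how many samples it contains) would confirm or rule this out.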

amir-ghz commented 2 years ago


The same thing is happening when I run the code. Did you find any workaround?!
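Editor's note for anyone triaging this: a training accuracy stuck around 10% on a 10-class dataset is exactly chance level, which usually indicates that the predictions carry no information about the labels (e.g. a label-encoding mismatch or a network whose gradients are dead) rather than slow convergence. A stdlib-only illustration of why an uninformative 10-class classifier sits near 0.10 accuracy; all names here are illustrative, not from the repo:

```python
import random

# Illustration: predictions that are independent of the labels score
# about 1/num_classes accuracy on a balanced 10-class task.
random.seed(0)
n, num_classes = 10_000, 10
y_true = [random.randrange(num_classes) for _ in range(n)]
y_pred = [random.randrange(num_classes) for _ in range(n)]  # a "broken" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
print(f"{accuracy:.3f}")  # hovers near 0.100, i.e. chance level
```

If the logged `capsnet_accuracy` tracks this chance level for many epochs, it is worth verifying that the one-hot labels fed to the model line up with the capsule outputs before tuning any hyperparameters.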