geoffwoollard / ece1512_project


subtracted data #10

Open geoffwoollard opened 5 years ago

geoffwoollard commented 5 years ago

Delete half / quarter / etc. of the density out of T20S.

Train a classifier for each dataset.

Find the threshold where the classifier is unable to distinguish subtracted from full particles.

Compare with 2D classification.
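As a rough sketch of the dataset construction (the function name and the masking scheme here are assumptions for illustration; the actual subtraction was presumably done on the T20S density upstream, not on raw pixels), deleting a fraction of each particle image could look like:

```python
import numpy as np

def delete_fraction(img, fraction):
    """Zero out the left `fraction` of a particle image.
    Crude stand-in for density subtraction (assumption: the real
    pipeline removes projected density, not a pixel block)."""
    out = img.copy()
    cut = int(round(img.shape[1] * fraction))
    out[:, :cut] = 0.0
    return out

rng = np.random.default_rng(0)
full = rng.normal(size=(400, 400))     # 400x400, like the network input below
half = delete_fraction(full, 0.5)      # "missing half" class
quarter = delete_fraction(full, 0.25)  # "missing quarter" class
```

The classifier-per-dataset step then amounts to training one binary (full vs. subtracted) model for each deletion fraction and scanning for the fraction at which test accuracy drops to chance.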

geoffwoollard commented 5 years ago

@Davjes15

The missing-quarter, missing-half, and full particles are distinguishable by 2D classification.

geoffwoollard commented 5 years ago

Results for missing particle vs. full particle.

We can get test accuracy in the high 90s (%), but it takes ~15 epochs.

The dense layer size is critical.

With 256 neurons the test accuracy only picks up after ~23 epochs, then climbs to 100% over the next few epochs.

With more neurons (512, 1024, 2048), things pick up faster.

This is with only 1000 particles per class (1000x2). Using more data may help.
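Why the dense layer dominates: after the final average pooling the feature map is 25 x 25 x 64 = 40,000 values, so a Dense(N) head on top of Flatten contributes 40,000*N weights plus N biases. A small pure-Python check against the counts printed in the summaries below:

```python
# Reproduce the dense_* parameter counts from the Keras model summaries.
FLAT = 25 * 25 * 64  # flatten_* output size (40,000) from the summary

def dense_params(n_neurons, n_inputs=FLAT):
    """Parameter count of a fully connected layer: weights + biases."""
    return n_inputs * n_neurons + n_neurons

for n in (256, 512, 1024, 2048):
    print(n, dense_params(n))
# 256  -> 10,240,256
# 512  -> 20,480,512
# 1024 -> 40,961,024
# each matching the dense_* rows in the summaries below.
```

So going from 256 to 2048 neurons multiplies the model's ~10M parameters roughly eightfold, almost entirely in this one layer.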

256
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_25 (Conv2D)           (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_26 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_13 (Activation)   (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_13 (Batc (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_27 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_14 (Activation)   (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_14 (Batc (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_15 (Activation)   (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_15 (Batc (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_16 (Activation)   (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_16 (Batc (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_4 (Average (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 40000)             0         
_________________________________________________________________
dense_7 (Dense)              (None, 256)               10240256  
_________________________________________________________________
dropout_4 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_8 (Dense)              (None, 2)                 514       
=================================================================
Total params: 10,336,210
Trainable params: 10,335,970
Non-trainable params: 240
_________________________________________________________________
None
Epoch 1/1
18/18 [==============================] - 59s 3s/step - loss: 1.1434 - categorical_accuracy: 0.4933
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7223 - categorical_accuracy: 0.4972
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7150 - categorical_accuracy: 0.4983
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7105 - categorical_accuracy: 0.4900
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7103 - categorical_accuracy: 0.5194
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 52.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7086 - categorical_accuracy: 0.5067
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 52.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.7096 - categorical_accuracy: 0.5078
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.7066 - categorical_accuracy: 0.5372
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.7054 - categorical_accuracy: 0.5239
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.7004 - categorical_accuracy: 0.5556
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6828 - categorical_accuracy: 0.5922
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6217 - categorical_accuracy: 0.6694
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.4569 - categorical_accuracy: 0.8072
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.2613 - categorical_accuracy: 0.9072
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.1733 - categorical_accuracy: 0.9472
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.1179 - categorical_accuracy: 0.9717
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0833 - categorical_accuracy: 0.9806
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0656 - categorical_accuracy: 0.9889
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0427 - categorical_accuracy: 0.9922
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0417 - categorical_accuracy: 0.9961
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0367 - categorical_accuracy: 0.9972
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 53.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0350 - categorical_accuracy: 0.9950
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0226 - categorical_accuracy: 0.9989
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 68.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0301 - categorical_accuracy: 0.9972
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 73.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0274 - categorical_accuracy: 0.9956
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 82.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0108 - categorical_accuracy: 0.9989
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 87.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.0093 - categorical_accuracy: 0.9994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 92.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0096 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 97.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0071 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 95.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0092 - categorical_accuracy: 0.9978
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 97.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0065 - categorical_accuracy: 0.9994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0055 - categorical_accuracy: 0.9994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 97.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0055 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0053 - categorical_accuracy: 0.9994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0065 - categorical_accuracy: 0.9989
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0034 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 95.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0039 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0036 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0030 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0031 - categorical_accuracy: 1.0000
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 100.00%
256
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_9 (Conv2D)            (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_5 (Activation)    (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_5 (Batch (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_6 (Activation)    (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_6 (Batch (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_7 (Activation)    (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_7 (Batch (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_8 (Activation)    (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_8 (Batch (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_2 (Average (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 40000)             0         
_________________________________________________________________
dense_3 (Dense)              (None, 256)               10240256  
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 2)                 514       
=================================================================
Total params: 10,336,210
Trainable params: 10,335,970
Non-trainable params: 240
_________________________________________________________________
None
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/1
18/18 [==============================] - 320s 18s/step - loss: 1.0584 - categorical_accuracy: 0.4933
200/200 [==============================] - 4s 21ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6985 - categorical_accuracy: 0.4783
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6958 - categorical_accuracy: 0.4972
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6944 - categorical_accuracy: 0.4911
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6939 - categorical_accuracy: 0.5100
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6938 - categorical_accuracy: 0.4900
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6946 - categorical_accuracy: 0.4756
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 48.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6929 - categorical_accuracy: 0.5100
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6940 - categorical_accuracy: 0.4883
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6938 - categorical_accuracy: 0.4911
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6932 - categorical_accuracy: 0.5011
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 55.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6932 - categorical_accuracy: 0.4961
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 52.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6940 - categorical_accuracy: 0.5106
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 58.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6939 - categorical_accuracy: 0.4828
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.6933 - categorical_accuracy: 0.5117
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6941 - categorical_accuracy: 0.4856
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 53.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6933 - categorical_accuracy: 0.5144
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 54.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6938 - categorical_accuracy: 0.5017
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 53.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6947 - categorical_accuracy: 0.4906
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 57.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.6937 - categorical_accuracy: 0.5161
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 54.00%
512
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_9 (Conv2D)            (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_5 (Activation)    (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_5 (Batch (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_6 (Activation)    (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_6 (Batch (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_7 (Activation)    (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_7 (Batch (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_8 (Activation)    (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_8 (Batch (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_2 (Average (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 40000)             0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               20480512  
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 20,576,978
Trainable params: 20,576,738
Non-trainable params: 240
_________________________________________________________________
None
Epoch 1/1
18/18 [==============================] - 254s 14s/step - loss: 1.5514 - categorical_accuracy: 0.5128
200/200 [==============================] - 4s 21ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.6951 - categorical_accuracy: 0.5028
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.6925 - categorical_accuracy: 0.5200
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 53.00%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.6849 - categorical_accuracy: 0.5606
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 54.00%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.6594 - categorical_accuracy: 0.5994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.50%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.5752 - categorical_accuracy: 0.6994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.3968 - categorical_accuracy: 0.8344
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.2396 - categorical_accuracy: 0.9078
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.1776 - categorical_accuracy: 0.9433
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 51.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.1273 - categorical_accuracy: 0.9572
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 56.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0975 - categorical_accuracy: 0.9750
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 91.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0769 - categorical_accuracy: 0.9794
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 95.50%
Epoch 1/1
18/18 [==============================] - 56s 3s/step - loss: 0.0681 - categorical_accuracy: 0.9839
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 97.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0555 - categorical_accuracy: 0.9856
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 97.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0501 - categorical_accuracy: 0.9906
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 91.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0366 - categorical_accuracy: 0.9950
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 94.50%
Epoch 1/1
 2/18 [==>...........................] - ETA: 52s - loss: 0.0287 - categorical_accuracy: 1.0000
1024
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_65 (Conv2D)           (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_66 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_33 (Activation)   (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_33 (Batc (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_25 (MaxPooling (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_67 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_68 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_34 (Activation)   (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_34 (Batc (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_26 (MaxPooling (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_69 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_70 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_35 (Activation)   (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_35 (Batc (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_27 (MaxPooling (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_71 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_72 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_36 (Activation)   (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_36 (Batc (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_9 (Average (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_9 (Flatten)          (None, 40000)             0         
_________________________________________________________________
dense_17 (Dense)             (None, 1024)              40961024  
_________________________________________________________________
dropout_9 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_18 (Dense)             (None, 2)                 2050      
=================================================================
Total params: 41,058,514
Trainable params: 41,058,274
Non-trainable params: 240
_________________________________________________________________
None
Epoch 1/1
18/18 [==============================] - 60s 3s/step - loss: 1.5270 - categorical_accuracy: 0.5006
200/200 [==============================] - 3s 13ms/step
categorical_accuracy: 54.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.8344 - categorical_accuracy: 0.4889
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.7067 - categorical_accuracy: 0.4922
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.7128 - categorical_accuracy: 0.4994
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.6982 - categorical_accuracy: 0.5261
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.6651 - categorical_accuracy: 0.6239
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.5514 - categorical_accuracy: 0.7222
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 59.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.3762 - categorical_accuracy: 0.8439
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 78.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.2143 - categorical_accuracy: 0.9200
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 76.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.1537 - categorical_accuracy: 0.9483
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 62.00%
2048
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_73 (Conv2D)           (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_74 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_37 (Activation)   (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_37 (Batc (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_75 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_76 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_38 (Activation)   (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_38 (Batc (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_29 (MaxPooling (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_77 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_78 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_39 (Activation)   (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_39 (Batc (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_30 (MaxPooling (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_79 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_80 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_40 (Activation)   (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_40 (Batc (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_10 (Averag (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_10 (Flatten)         (None, 40000)             0         
_________________________________________________________________
dense_19 (Dense)             (None, 2048)              81922048  
_________________________________________________________________
dropout_10 (Dropout)         (None, 2048)              0         
_________________________________________________________________
dense_20 (Dense)             (None, 2)                 4098      
=================================================================
Total params: 82,021,586
Trainable params: 82,021,346
Non-trainable params: 240
_________________________________________________________________
None
Epoch 1/1
18/18 [==============================] - 61s 3s/step - loss: 6.3784 - categorical_accuracy: 0.4928
200/200 [==============================] - 3s 14ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 54s 3s/step - loss: 6.4778 - categorical_accuracy: 0.5022
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 1.8117 - categorical_accuracy: 0.4978
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.6724 - categorical_accuracy: 0.5833
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.4711 - categorical_accuracy: 0.7800
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.2860 - categorical_accuracy: 0.8922
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.1923 - categorical_accuracy: 0.9372
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.1374 - categorical_accuracy: 0.9600
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 55.50%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.1134 - categorical_accuracy: 0.9706
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 74.00%
Epoch 1/1
18/18 [==============================] - 55s 3s/step - loss: 0.0787 - categorical_accuracy: 0.9839
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 88.50%
4096
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_81 (Conv2D)           (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_82 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
activation_41 (Activation)   (None, 400, 400, 8)       0         
_________________________________________________________________
batch_normalization_41 (Batc (None, 400, 400, 8)       32        
_________________________________________________________________
max_pooling2d_31 (MaxPooling (None, 200, 200, 8)       0         
_________________________________________________________________
conv2d_83 (Conv2D)           (None, 200, 200, 8)       3144      
_________________________________________________________________
conv2d_84 (Conv2D)           (None, 200, 200, 16)      6288      
_________________________________________________________________
activation_42 (Activation)   (None, 200, 200, 16)      0         
_________________________________________________________________
batch_normalization_42 (Batc (None, 200, 200, 16)      64        
_________________________________________________________________
max_pooling2d_32 (MaxPooling (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_85 (Conv2D)           (None, 100, 100, 32)      4640      
_________________________________________________________________
conv2d_86 (Conv2D)           (None, 100, 100, 32)      9248      
_________________________________________________________________
activation_43 (Activation)   (None, 100, 100, 32)      0         
_________________________________________________________________
batch_normalization_43 (Batc (None, 100, 100, 32)      128       
_________________________________________________________________
max_pooling2d_33 (MaxPooling (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_87 (Conv2D)           (None, 50, 50, 64)        18496     
_________________________________________________________________
conv2d_88 (Conv2D)           (None, 50, 50, 64)        36928     
_________________________________________________________________
activation_44 (Activation)   (None, 50, 50, 64)        0         
_________________________________________________________________
batch_normalization_44 (Batc (None, 50, 50, 64)        256       
_________________________________________________________________
average_pooling2d_11 (Averag (None, 25, 25, 64)        0         
_________________________________________________________________
flatten_11 (Flatten)         (None, 40000)             0         
_________________________________________________________________
dense_21 (Dense)             (None, 4096)              163844096 
_________________________________________________________________
dropout_11 (Dropout)         (None, 4096)              0         
_________________________________________________________________
dense_22 (Dense)             (None, 2)                 8194      
=================================================================
Total params: 163,947,730
Trainable params: 163,947,490
Non-trainable params: 240
_________________________________________________________________
None
Epoch 1/1
18/18 [==============================] - 66s 4s/step - loss: 2.8889 - categorical_accuracy: 0.5739
200/200 [==============================] - 3s 15ms/step
categorical_accuracy: 49.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.2635 - categorical_accuracy: 0.8939
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 50.00%
Epoch 1/1
18/18 [==============================] - 58s 3s/step - loss: 0.1346 - categorical_accuracy: 0.9544
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 83.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0870 - categorical_accuracy: 0.9778
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 91.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0649 - categorical_accuracy: 0.9850
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 96.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0445 - categorical_accuracy: 0.9911
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 98.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0355 - categorical_accuracy: 0.9939
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 99.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0324 - categorical_accuracy: 0.9956
200/200 [==============================] - 2s 10ms/step
categorical_accuracy: 100.00%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0248 - categorical_accuracy: 0.9978
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 99.50%
Epoch 1/1
18/18 [==============================] - 57s 3s/step - loss: 0.0255 - categorical_accuracy: 0.9956
200/200 [==============================] - 2s 9ms/step
categorical_accuracy: 99.50%
Davjes15 commented 5 years ago

@geoffwoollard what is the architecture of this network? I want to use it with the noise images. I cannot get more than 48% accuracy running the original deep consensus network with a 1024-unit dense layer. I am trying to create the performance graphs for the noise images.

geoffwoollard commented 5 years ago

The architecture is shown in the model.summary() output. I built the model with deep_consensus.deep_consensus_wrapper. The dataset used is mentioned at the beginning.

Davjes15 commented 5 years ago

@geoffwoollard Thank you, I got 90% accuracy with the noisy images.

geoffwoollard commented 5 years ago

I'm still having a hard time with the empirical data. It should be easy to recognize whether a particle is present or absent, since one can tell by eye.

I can get good training error, but the test accuracy is essentially random. A few times it reached 80%, so probably not by chance. I saved these models (model-missing-particle-J125-J128-stride4-1.8Mparams-ep- etc)
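Since the validation accuracy here swings between roughly 49% and 80% from one epoch to the next while training accuracy climbs steadily, one option is to checkpoint only the epochs where the validation metric improves, rather than saving every epoch. A minimal bookkeeping sketch in plain Python; run_epoch, evaluate, and save_checkpoint are hypothetical stand-ins for model.fit_generator, model.evaluate, and model.save_weights:

```python
def train_keep_best(run_epoch, evaluate, save_checkpoint, n_epochs):
    """Run n_epochs of training, saving a checkpoint only when the
    validation metric improves on the best seen so far."""
    best_metric = float('-inf')
    best_epoch = None
    for epoch in range(n_epochs):
        run_epoch(epoch)           # one pass over the training data
        metric = evaluate(epoch)   # e.g. validation AUC or accuracy
        if metric > best_metric:
            best_metric, best_epoch = metric, epoch
            save_checkpoint(epoch) # persist only the improving epochs
    return best_epoch, best_metric

# Usage with stubbed-out pieces (the metric history mimics a noisy run):
history = [0.50, 0.46, 0.68, 0.85, 0.87, 0.84]
saved = []
best_epoch, best = train_keep_best(
    run_epoch=lambda e: None,
    evaluate=lambda e: history[e],
    save_checkpoint=saved.append,
    n_epochs=len(history),
)
print(best_epoch, best, saved)  # -> 4 0.87 [0, 2, 3, 4]
```

Keras offers the same behavior via the ModelCheckpoint callback with save_best_only=True, but that only applies when the validation metric is computed inside fit, not in a manual epoch loop like the one used in this thread.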

model = deepconsensus_layers_wrapper(
    input_shape=X_val.shape[1::],
    num_hidden_layers=4,
    conv2d1_k=(30,15,7,3),
    conv2d2_k=(30,15,7,3),
    conv2d1_n=(8,8,32,64),
    conv2d2_n=(8,16,32,64),
    mp_k=(3,3,3,4),
    mp_strides=(4,4,2,2),
    pooling_type=('max','max','max','av'),
)
model.summary()
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_9 (Conv2D)            (None, 400, 400, 8)       7208      
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 400, 400, 8)       57608     
_________________________________________________________________
batch_normalization_5 (Batch (None, 400, 400, 8)       32        
_________________________________________________________________
activation_5 (Activation)    (None, 400, 400, 8)       0         
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 100, 100, 8)       0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 100, 100, 8)       14408     
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 100, 100, 16)      28816     
_________________________________________________________________
batch_normalization_6 (Batch (None, 100, 100, 16)      64        
_________________________________________________________________
activation_6 (Activation)    (None, 100, 100, 16)      0         
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 25, 25, 16)        0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 25, 25, 32)        25120     
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 25, 25, 32)        50208     
_________________________________________________________________
batch_normalization_7 (Batch (None, 25, 25, 32)        128       
_________________________________________________________________
activation_7 (Activation)    (None, 25, 25, 32)        0         
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 13, 13, 64)        18496     
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 13, 13, 64)        36928     
_________________________________________________________________
batch_normalization_8 (Batch (None, 13, 13, 64)        256       
_________________________________________________________________
activation_8 (Activation)    (None, 13, 13, 64)        0         
_________________________________________________________________
average_pooling2d_2 (Average (None, 7, 7, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 3136)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               1606144   
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 1,846,442
Trainable params: 1,846,202
Non-trainable params: 240
epoch 0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/1
18/18 [==============================] - 473s 26s/step - loss: 1.3503 - categorical_accuracy: 0.4767
200/200 [==============================] - 15s 76ms/step
categorical_accuracy: 49.00%
epoch 1
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.7236 - categorical_accuracy: 0.4956
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 2
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.7028 - categorical_accuracy: 0.5200
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 3
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.6751 - categorical_accuracy: 0.5644
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 4
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.5238 - categorical_accuracy: 0.7639
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 5
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.3713 - categorical_accuracy: 0.8494
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 6
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.3106 - categorical_accuracy: 0.8733
200/200 [==============================] - 14s 71ms/step
categorical_accuracy: 49.00%
epoch 7
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.2807 - categorical_accuracy: 0.8828
200/200 [==============================] - 15s 75ms/step
categorical_accuracy: 49.00%
epoch 8
Epoch 1/1
18/18 [==============================] - 821s 46s/step - loss: 0.2498 - categorical_accuracy: 0.8994
200/200 [==============================] - 35s 177ms/step
categorical_accuracy: 49.50%
epoch 9
Epoch 1/1
18/18 [==============================] - 754s 42s/step - loss: 0.2311 - categorical_accuracy: 0.9167
200/200 [==============================] - 35s 177ms/step
categorical_accuracy: 80.50%
epoch 10
Epoch 1/1
18/18 [==============================] - 434s 24s/step - loss: 0.2163 - categorical_accuracy: 0.9200
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 75.00%
epoch 11
Epoch 1/1
18/18 [==============================] - 380s 21s/step - loss: 0.1878 - categorical_accuracy: 0.9317
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 12
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.1671 - categorical_accuracy: 0.9528
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 13
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.1470 - categorical_accuracy: 0.9578
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 14
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.0958 - categorical_accuracy: 0.9800
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 15
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.0683 - categorical_accuracy: 0.9889
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 16
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.0582 - categorical_accuracy: 0.9917
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 17
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.0397 - categorical_accuracy: 0.9922
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 18
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.0230 - categorical_accuracy: 0.9956
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 49.00%
epoch 19
Epoch 1/1
18/18 [==============================] - 382s 21s/step - loss: 0.0162 - categorical_accuracy: 0.9983
200/200 [==============================] - 14s 71ms/step
categorical_accuracy: 49.00%
epoch 20
Epoch 1/1
18/18 [==============================] - 381s 21s/step - loss: 0.0130 - categorical_accuracy: 0.9989
200/200 [==============================] - 14s 70ms/step
categorical_accuracy: 50.00%
epoch 21
Epoch 1/1
15/18 [========================>.....] - ETA: 1:06 - loss: 0.0111 - categorical_accuracy: 0.9993
geoffwoollard commented 5 years ago

Can get 90% validation AUC on the missing-particle vs. present-particle task.

model = deepconsensus_layers_wrapper(
    input_shape=X_val.shape[1::],
    num_hidden_layers=4,
    conv2d1_k=(15,15,7,3),
    conv2d2_k=(15,15,7,3),
    conv2d1_n=(8,8,32,64),
    conv2d2_n=(8,16,32,64),
    mp_k=(3,3,3,4),
    mp_strides=(4,4,2,2),
    pooling_type=('max','max','max','av'),
    dense13_n=512,
    dropout13_rate=0.5,
)
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 256, 256, 8)       1808      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 256, 256, 8)       14408     
_________________________________________________________________
batch_normalization_1 (Batch (None, 256, 256, 8)       32        
_________________________________________________________________
activation_1 (Activation)    (None, 256, 256, 8)       0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 64, 8)         0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 64, 64, 8)         14408     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 64, 64, 16)        28816     
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 64, 16)        64        
_________________________________________________________________
activation_2 (Activation)    (None, 64, 64, 16)        0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 16, 16, 16)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 16, 16, 32)        25120     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 16, 16, 32)        50208     
_________________________________________________________________
batch_normalization_3 (Batch (None, 16, 16, 32)        128       
_________________________________________________________________
activation_3 (Activation)    (None, 16, 16, 32)        0         
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 8, 8, 32)          0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 8, 8, 64)          18496     
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 8, 8, 64)          36928     
_________________________________________________________________
batch_normalization_4 (Batch (None, 8, 8, 64)          256       
_________________________________________________________________
activation_4 (Activation)    (None, 8, 8, 64)          0         
_________________________________________________________________
average_pooling2d_1 (Average (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               524800    
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 716,498
Trainable params: 716,258
Non-trainable params: 240

for epoch in range(0,2):
    print('epoch %i' % epoch)
    model.fit_generator(
        fit_generator_helper.image_loader(df_train, batch_size=batch_size, nx=nx, ny=ny, crop_n=256),
        steps_per_epoch=steps_per_epoch,  # steps_per_epoch is the number of batches per epoch
        epochs=1,
    )
    roc_auc_train, _, _, _ = roc_auc(X_train, Y_train, model)
    print('train auc=%.2f' % roc_auc_train)

    scores = model.evaluate(X_val, Y_val)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

    roc_aucs[epoch], fprs[epoch], tprs[epoch], thresholds[epoch] = roc_auc(X_val, Y_val, model)
    print('val auc=%.2f' % roc_aucs[epoch])

    title = 'models/model-missing-particle-J125-J128-0.7Mparams-ep-%s-' % epoch
    model_yaml = model.to_yaml()
    with open(title + timestr + '.yaml', "w") as yaml_file:
        yaml_file.write(model_yaml)
    model.save_weights(title + timestr + ".h5")
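The roc_auc helper called in the loop above is project code whose implementation is not pasted here. Assuming it scores the positive-class probabilities against the labels, the core quantity can be sketched in plain Python via the rank-sum (Mann-Whitney U) identity, which is also what sklearn.metrics.roc_auc_score computes:

```python
def roc_auc_score_simple(y_true, y_score):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity.
    y_true: 1 for the positive class, 0 otherwise.
    y_score: predicted score/probability for the positive class.
    Tied scores receive averaged ranks."""
    order = sorted(range(len(y_score)), key=lambda i: y_score[i])
    ranks = [0.0] * len(y_score)
    i = 0
    while i < len(order):
        # extend j to cover the whole group of tied scores starting at i
        j = i
        while j + 1 < len(order) and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    rank_sum_pos = sum(r for r, t in zip(ranks, y_true) if t == 1)
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfect separation gives AUC = 1.0:
print(roc_auc_score_simple([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # -> 1.0
```

This explains why AUC can sit near 0.9 while categorical accuracy hovers at 50.5%: AUC measures ranking quality and ignores the decision threshold, so a model with poorly calibrated outputs can rank well yet classify at chance.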

epoch 0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/1
18/18 [==============================] - 273s 15s/step - loss: 0.8261 - categorical_accuracy: 0.4756
train auc=0.48
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 49.00%
val auc=0.50
epoch 1
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.7318 - categorical_accuracy: 0.5083
train auc=0.53
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.46
epoch 3
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.7195 - categorical_accuracy: 0.5017
train auc=0.50
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.43
epoch 4
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.7174 - categorical_accuracy: 0.4800
train auc=0.50
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.49
epoch 5
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.6486 - categorical_accuracy: 0.6089
train auc=0.67
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.68
epoch 6
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.3943 - categorical_accuracy: 0.8378
train auc=0.89
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.85
epoch 7
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.3084 - categorical_accuracy: 0.8789
train auc=0.91
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.86
epoch 8
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.2706 - categorical_accuracy: 0.8961
train auc=0.92
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.87
epoch 9
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.2393 - categorical_accuracy: 0.9100
train auc=0.94
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.87
epoch 10
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.2005 - categorical_accuracy: 0.9333
train auc=0.95
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.85
epoch 11
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.1666 - categorical_accuracy: 0.9467
train auc=0.94
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.84
epoch 12
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.1477 - categorical_accuracy: 0.9506
train auc=0.80
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.72
epoch 13
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0995 - categorical_accuracy: 0.9767
train auc=0.95
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 55.50%
val auc=0.85
epoch 14
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0719 - categorical_accuracy: 0.9856
train auc=0.98
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 60.50%
val auc=0.88
epoch 15
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0519 - categorical_accuracy: 0.9894
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 85.50%
val auc=0.93
epoch 16
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0534 - categorical_accuracy: 0.9872
train auc=0.94
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 51.00%
val auc=0.84
epoch 17
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0336 - categorical_accuracy: 0.9933
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 77.50%
val auc=0.91
epoch 18
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0214 - categorical_accuracy: 0.9972
train auc=0.97
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 62.00%
val auc=0.85
epoch 19
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0178 - categorical_accuracy: 0.9967
train auc=0.90
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.79
epoch 20
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0122 - categorical_accuracy: 0.9978
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 85.50%
val auc=0.93
epoch 21
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0203 - categorical_accuracy: 0.9944
train auc=0.93
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.50%
val auc=0.83
epoch 22
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0142 - categorical_accuracy: 0.9983
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 66.50%
val auc=0.93
epoch 23
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0132 - categorical_accuracy: 0.9961
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 69.00%
val auc=0.93
epoch 24
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0089 - categorical_accuracy: 0.9989
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 76.50%
val auc=0.94
epoch 25
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0079 - categorical_accuracy: 0.9989
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 49.50%
val auc=0.92
epoch 26
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0051 - categorical_accuracy: 0.9994
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 52.50%
val auc=0.93
epoch 27
Epoch 1/1
18/18 [==============================] - 21s 1s/step - loss: 0.0035 - categorical_accuracy: 1.0000
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.00%
val auc=0.93
epoch 28
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0030 - categorical_accuracy: 1.0000
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.00%
val auc=0.93
epoch 29
Epoch 1/1
18/18 [==============================] - 22s 1s/step - loss: 0.0025 - categorical_accuracy: 1.0000
train auc=1.00
200/200 [==============================] - 1s 3ms/step
categorical_accuracy: 50.00%
val auc=0.92
geoffwoollard commented 5 years ago

Using the same architecture as above, but cropping less tightly (400 px), we can still train to pretty good validation performance, although it takes longer.

Note: the saved models were mis-named models/model-missing-particle-J125-J128-0.7Mparams-ep-%s-; they should be models/model-missing-particle-J125-J128-400pix-1.8Mparams-ep-%s- (for epochs 3 to 29).
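Since the files on disk carry the wrong stem, a small helper like the sketch below could batch-rename them. This is a hypothetical utility, not part of the project code; the old and new stems are taken from the note above, and everything after the stem (epoch number, timestamp, extension) is preserved.

```python
import glob
import os

# Stems from the note above; the suffix (epoch, timestamp, extension) is kept.
OLD_STEM = 'models/model-missing-particle-J125-J128-0.7Mparams-ep-'
NEW_STEM = 'models/model-missing-particle-J125-J128-400pix-1.8Mparams-ep-'

def rename_models(old_stem=OLD_STEM, new_stem=NEW_STEM):
    """Rename every file starting with old_stem to start with new_stem."""
    renamed = []
    for path in sorted(glob.glob(old_stem + '*')):
        new_path = new_stem + path[len(old_stem):]
        os.rename(path, new_path)
        renamed.append(new_path)
    return renamed
```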

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_9 (Conv2D)            (None, 400, 400, 8)       1808      
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 400, 400, 8)       14408     
_________________________________________________________________
batch_normalization_5 (Batch (None, 400, 400, 8)       32        
_________________________________________________________________
activation_5 (Activation)    (None, 400, 400, 8)       0         
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 100, 100, 8)       0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 100, 100, 8)       14408     
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 100, 100, 16)      28816     
_________________________________________________________________
batch_normalization_6 (Batch (None, 100, 100, 16)      64        
_________________________________________________________________
activation_6 (Activation)    (None, 100, 100, 16)      0         
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 25, 25, 16)        0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 25, 25, 32)        25120     
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 25, 25, 32)        50208     
_________________________________________________________________
batch_normalization_7 (Batch (None, 25, 25, 32)        128       
_________________________________________________________________
activation_7 (Activation)    (None, 25, 25, 32)        0         
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 13, 13, 64)        18496     
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 13, 13, 64)        36928     
_________________________________________________________________
batch_normalization_8 (Batch (None, 13, 13, 64)        256       
_________________________________________________________________
activation_8 (Activation)    (None, 13, 13, 64)        0         
_________________________________________________________________
average_pooling2d_2 (Average (None, 7, 7, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 3136)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               1606144   
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 1,797,842
Trainable params: 1,797,602
Non-trainable params: 240
_________________________________________________________________

for epoch in range(0,2):
    print('epoch %i' % epoch)
    model.fit_generator(fit_generator_helper.image_loader(df_train,batch_size=batch_size,nx=nx,ny=ny,crop_n=400),
                    steps_per_epoch=steps_per_epoch, # steps_per_epoch is number of batches per epoch
                    epochs=1,
                   )
    roc_auc_train,_,_,_ = roc_auc(X_train,Y_train,model)
    print('train auc=%.2f'%roc_auc_train)

    scores = model.evaluate(X_val, Y_val)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

    roc_aucs[epoch],fprs[epoch], tprs[epoch], thresholds[epoch] = roc_auc(X_val,Y_val,model)
    print('val auc=%.2f'%roc_aucs[epoch])

    title='models/model-missing-particle-J125-J128-400pix-1.8Mparams-ep-%s-' % epoch
    model_yaml = model.to_yaml()
    with open(title+timestr+'.yaml', "w") as yaml_file:
        yaml_file.write(model_yaml)
    model.save_weights(title+timestr+".h5")
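The roc_auc helper called in the loop above is not shown in this thread. A minimal sketch of what it likely computes, assuming one-hot two-column labels and softmax class-probability outputs, using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_auc(X, Y, model):
    """Sketch of the roc_auc helper used above (signature assumed).

    Assumes Y is one-hot with two columns and model.predict returns
    class probabilities; scores class 1 against class 0.
    """
    y_score = model.predict(X)[:, 1]
    y_true = Y[:, 1]
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return auc(fpr, tpr), fpr, tpr, thresholds
```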

epoch 0
Epoch 1/1
18/18 [==============================] - 69s 4s/step - loss: 1.2276 - categorical_accuracy: 0.5056
train auc=0.49
200/200 [==============================] - 2s 11ms/step
categorical_accuracy: 49.50%
val auc=0.51
epoch 1
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.7193 - categorical_accuracy: 0.4978
train auc=0.57
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 48.00%
val auc=0.50

epoch 3
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.7166 - categorical_accuracy: 0.4911
train auc=0.57
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 48.50%
val auc=0.51
epoch 4
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.7037 - categorical_accuracy: 0.5039
train auc=0.49
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 48.50%
val auc=0.33
epoch 5
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6995 - categorical_accuracy: 0.5033
train auc=0.45
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 44.50%
val auc=0.42
epoch 6
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6979 - categorical_accuracy: 0.4950
train auc=0.51
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 50.50%
val auc=0.45
epoch 7
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6957 - categorical_accuracy: 0.5128
train auc=0.49
200/200 [==============================] - 2s 8ms/step
categorical_accuracy: 53.00%
val auc=0.45
epoch 8
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6980 - categorical_accuracy: 0.4967
train auc=0.56
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 41.50%
val auc=0.36
epoch 9
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6987 - categorical_accuracy: 0.4894
train auc=0.59
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 48.50%
val auc=0.37
epoch 10
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6949 - categorical_accuracy: 0.5100
train auc=0.48
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 49.50%
val auc=0.41
epoch 11
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6965 - categorical_accuracy: 0.4983
train auc=0.46
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 50.00%
val auc=0.44
epoch 12
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6876 - categorical_accuracy: 0.5528
train auc=0.44
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 50.50%
val auc=0.50
epoch 13
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.6536 - categorical_accuracy: 0.6294
train auc=0.30
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 50.50%
val auc=0.43
epoch 14
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.5138 - categorical_accuracy: 0.7744
train auc=0.87
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 53.00%
val auc=0.84
epoch 15
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.3987 - categorical_accuracy: 0.8367
train auc=0.91
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 50.50%
val auc=0.85
epoch 16
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.3065 - categorical_accuracy: 0.8806
train auc=0.94
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 51.00%
val auc=0.89
epoch 17
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.2649 - categorical_accuracy: 0.9000
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 51.50%
val auc=0.90
epoch 18
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.2341 - categorical_accuracy: 0.9133
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 61.00%
val auc=0.90
epoch 19
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.2147 - categorical_accuracy: 0.9189
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 84.50%
val auc=0.90
epoch 20
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1927 - categorical_accuracy: 0.9272
train auc=0.97
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 51.00%
val auc=0.93
epoch 21
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1736 - categorical_accuracy: 0.9400
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 81.50%
val auc=0.92
epoch 22
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1491 - categorical_accuracy: 0.9517
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 83.00%
val auc=0.91
epoch 23
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1354 - categorical_accuracy: 0.9600
train auc=0.93
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 84.00%
val auc=0.89
epoch 24
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1110 - categorical_accuracy: 0.9683
train auc=0.95
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 62.50%
val auc=0.91
epoch 25
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1092 - categorical_accuracy: 0.9678
train auc=0.96
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 51.50%
val auc=0.92
epoch 26
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.1083 - categorical_accuracy: 0.9683
train auc=0.98
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 74.00%
val auc=0.94
epoch 27
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.0899 - categorical_accuracy: 0.9789
train auc=0.96
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 51.50%
val auc=0.91
epoch 28
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.0776 - categorical_accuracy: 0.9811
train auc=0.99
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 65.00%
val auc=0.95
epoch 29
Epoch 1/1
18/18 [==============================] - 50s 3s/step - loss: 0.0695 - categorical_accuracy: 0.9850
train auc=0.96
200/200 [==============================] - 1s 7ms/step
categorical_accuracy: 90.50%
val auc=0.95
geoffwoollard commented 5 years ago

However, with only part of the core missing vs. the whole particle, the model cannot classify; it does not even train, and the loss stalls. Perhaps we need more data or a more complex model.
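The stalled loss value is itself informative: for two balanced classes, a classifier that always outputs 0.5/0.5 has a categorical cross-entropy of ln 2 ≈ 0.693, which is where the losses in the log below sit, i.e. the model is at chance. A quick check:

```python
import math

def cross_entropy(p):
    """Cross-entropy of predicting probability p for the true class."""
    return -math.log(p)

# A model that always predicts 0.5/0.5 on balanced two-class data is
# at chance; its expected loss is ln 2, matching the stalled ~0.69 losses.
chance_loss = cross_entropy(0.5)
print(round(chance_loss, 4))  # 0.6931
```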

Note: these models were named models/model-missing-half-J123-J128-256pix-1Mparams-ep-%s-, but the crop was actually 300 px.

title_stem = 'models/model-missing-half-J123-J128-256pix-1Mparams-ep-%s-'
for epoch in range(0,30):
    print('epoch %i' % epoch)
    model.fit_generator(fit_generator_helper.image_loader(df_train,batch_size=batch_size,nx=nx,ny=ny,crop_n=crop_n),
                    steps_per_epoch=steps_per_epoch, # steps_per_epoch is number of batches per epoch
                    epochs=1,
                   )
    roc_auc_train,_,_,_ = roc_auc(X_train,Y_train,model)
    print('train auc=%.2f'%roc_auc_train)

    scores = model.evaluate(X_val, Y_val)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

    roc_aucs[epoch],fprs[epoch], tprs[epoch], thresholds[epoch] = roc_auc(X_val,Y_val,model)
    print('val auc=%.2f'%roc_aucs[epoch])

    title=title_stem % epoch
    model_yaml = model.to_yaml()
    with open(title+timestr+'.yaml', "w") as yaml_file:
        yaml_file.write(model_yaml)
    model.save_weights(title+timestr+".h5")
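Since a checkpoint is saved every epoch and the validation AUC is recorded in the roc_aucs dict, the checkpoint worth keeping can be picked afterwards. A small sketch (the example AUC values are illustrative, not from this run):

```python
def best_epoch(roc_aucs):
    """Return the epoch whose validation ROC AUC is highest."""
    return max(roc_aucs, key=roc_aucs.get)

# Illustrative per-epoch validation AUCs, as populated in the loops above;
# the corresponding saved weights file is the one to reload.
roc_aucs = {0: 0.44, 1: 0.44, 14: 0.84, 28: 0.95}
chosen = best_epoch(roc_aucs)
```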

epoch 0
Epoch 1/1
18/18 [==============================] - 158s 9s/step - loss: 0.9545 - categorical_accuracy: 0.4756
train auc=0.49
200/200 [==============================] - 2s 8ms/step
categorical_accuracy: 50.50%
val auc=0.44
epoch 1
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7272 - categorical_accuracy: 0.4894
train auc=0.49
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.44
epoch 2
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7041 - categorical_accuracy: 0.5056
train auc=0.54
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.44
epoch 3
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7133 - categorical_accuracy: 0.4839
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.46
epoch 4
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7084 - categorical_accuracy: 0.4872
train auc=0.49
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.45
epoch 5
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7008 - categorical_accuracy: 0.4994
train auc=0.51
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 41.50%
val auc=0.44
epoch 6
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7004 - categorical_accuracy: 0.4900
train auc=0.55
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 46.00%
val auc=0.37
epoch 7
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7015 - categorical_accuracy: 0.4889
train auc=0.60
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 48.00%
val auc=0.38
epoch 8
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.7000 - categorical_accuracy: 0.4872
train auc=0.48
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.42
epoch 9
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6986 - categorical_accuracy: 0.4983
train auc=0.46
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.31
epoch 10
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6988 - categorical_accuracy: 0.4833
train auc=0.53
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.00%
val auc=0.21
epoch 11
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6936 - categorical_accuracy: 0.5194
train auc=0.48
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 49.50%
val auc=0.22
epoch 12
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6980 - categorical_accuracy: 0.4917
train auc=0.56
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.23
epoch 13
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6955 - categorical_accuracy: 0.5133
train auc=0.51
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.24
epoch 14
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6965 - categorical_accuracy: 0.4967
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 49.50%
val auc=0.35
epoch 15
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6938 - categorical_accuracy: 0.5161
train auc=0.49
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.26
epoch 16
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6939 - categorical_accuracy: 0.5117
train auc=0.52
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 44.00%
val auc=0.19
epoch 17
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6946 - categorical_accuracy: 0.5028
train auc=0.51
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 49.00%
val auc=0.22
epoch 18
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6950 - categorical_accuracy: 0.5039
train auc=0.60
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.13
epoch 19
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6948 - categorical_accuracy: 0.5122
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 42.50%
val auc=0.19
epoch 20
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6953 - categorical_accuracy: 0.5028
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.19
epoch 21
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6951 - categorical_accuracy: 0.5044
train auc=0.49
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.26
epoch 22
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6946 - categorical_accuracy: 0.5144
train auc=0.49
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.16
epoch 23
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6959 - categorical_accuracy: 0.4933
train auc=0.55
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 49.50%
val auc=0.16
epoch 24
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6934 - categorical_accuracy: 0.5106
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.11
epoch 25
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6932 - categorical_accuracy: 0.5100
train auc=0.53
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 47.00%
val auc=0.16
epoch 26
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6940 - categorical_accuracy: 0.5022
train auc=0.54
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 39.50%
val auc=0.26
epoch 27
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6929 - categorical_accuracy: 0.5139
train auc=0.51
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.15
epoch 28
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6934 - categorical_accuracy: 0.5156
train auc=0.50
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.22
epoch 29
Epoch 1/1
18/18 [==============================] - 29s 2s/step - loss: 0.6945 - categorical_accuracy: 0.5061
train auc=0.52
200/200 [==============================] - 1s 4ms/step
categorical_accuracy: 50.50%
val auc=0.14
geoffwoollard commented 5 years ago

We can get a validation ROC AUC of 0.7 to 0.8 on the full particle vs. the missing middle (like a hamburger: subtract the patty and leave the buns).
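The "missing patty" maps were produced in the cryo-EM processing pipeline, but the idea of zeroing a central slab of a 3D volume can be sketched in numpy. This is illustrative only; the axis and fraction are assumptions, not the parameters used in this project:

```python
import numpy as np

def zero_middle_slab(volume, frac=0.5, axis=2):
    """Zero out a central slab covering `frac` of the volume along `axis`.

    Illustrative sketch of the hamburger-style subtraction: the patty
    (middle slab) is removed and the buns (outer slabs) are kept.
    """
    out = volume.copy()
    n = out.shape[axis]
    lo = int(n * (1 - frac) / 2)
    hi = n - lo
    sl = [slice(None)] * out.ndim
    sl[axis] = slice(lo, hi)
    out[tuple(sl)] = 0.0
    return out
```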


model = deepconsensus_layers_wrapper(
    input_shape=X_val.shape[1::],
    num_hidden_layers=4,
    conv2d1_k=(15,9,7,3),
    conv2d2_k=(15,9,7,3),
    conv2d1_n=(8,8,32,64),
    conv2d2_n=(8,16,32,64),
    mp_k=(3,3,3,4),
    mp_strides=(4,2,2,2),
    pooling_type=('max','max','max','av'),
    dense13_n=256,
    dropout13_rate=0.5,
)
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 300, 300, 8)       1808      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 300, 300, 8)       14408     
_________________________________________________________________
batch_normalization_1 (Batch (None, 300, 300, 8)       32        
_________________________________________________________________
activation_1 (Activation)    (None, 300, 300, 8)       0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 75, 75, 8)         0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 75, 75, 8)         5192      
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 75, 75, 16)        10384     
_________________________________________________________________
batch_normalization_2 (Batch (None, 75, 75, 16)        64        
_________________________________________________________________
activation_2 (Activation)    (None, 75, 75, 16)        0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 38, 38, 16)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 38, 38, 32)        25120     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 38, 38, 32)        50208     
_________________________________________________________________
batch_normalization_3 (Batch (None, 38, 38, 32)        128       
_________________________________________________________________
activation_3 (Activation)    (None, 38, 38, 32)        0         
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 19, 19, 32)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 19, 19, 64)        18496     
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 19, 19, 64)        36928     
_________________________________________________________________
batch_normalization_4 (Batch (None, 19, 19, 64)        256       
_________________________________________________________________
activation_4 (Activation)    (None, 19, 19, 64)        0         
_________________________________________________________________
average_pooling2d_1 (Average (None, 10, 10, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 6400)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               1638656   
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 514       
=================================================================
Total params: 1,802,194
Trainable params: 1,801,954
Non-trainable params: 240
_________________________________________________________________

title_stem = 'models/model-t20s-missing-patty-J128-J168-300pix-256dense-1.8Mparams-ep-%s-'
for epoch in range(0,2):
    print('epoch %i' % epoch)
    model.fit_generator(fit_generator_helper.image_loader(df_train,batch_size=batch_size,nx=nx,ny=ny,crop_n=crop_n),
                    steps_per_epoch=steps_per_epoch, # steps_per_epoch is number of batches per epoch
                    epochs=1,
                   )
    roc_auc_train,_,_,_ = roc_auc(X_train,Y_train,model)
    print('train auc=%.2f'%roc_auc_train)

    scores = model.evaluate(X_val, Y_val)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

    roc_aucs[epoch],fprs[epoch], tprs[epoch], thresholds[epoch] = roc_auc(X_val,Y_val,model)
    print('val auc=%.2f'%roc_aucs[epoch])

    title=title_stem % epoch
    model_yaml = model.to_yaml()
    with open(title+timestr+'.yaml', "w") as yaml_file:
        yaml_file.write(model_yaml)
    model.save_weights(title+timestr+".h5")

epoch 0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/1
18/18 [==============================] - 316s 18s/step - loss: 0.9351 - categorical_accuracy: 0.5017
train auc=0.55
200/200 [==============================] - 4s 19ms/step
categorical_accuracy: 54.00%
val auc=0.46
epoch 1
Epoch 1/1
18/18 [==============================] - 32s 2s/step - loss: 0.6976 - categorical_accuracy: 0.5117
train auc=0.63
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 51.50%
val auc=0.46

epoch 2
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6917 - categorical_accuracy: 0.5100
train auc=0.67
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 55.00%
val auc=0.58
epoch 3
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6763 - categorical_accuracy: 0.5761
train auc=0.74
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 54.00%
val auc=0.66
epoch 4
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6091 - categorical_accuracy: 0.6650
train auc=0.89
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 54.00%
val auc=0.79
epoch 5
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4952 - categorical_accuracy: 0.7650
train auc=0.87
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.00%
val auc=0.77
epoch 6
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3992 - categorical_accuracy: 0.8117
train auc=0.87
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.50%
val auc=0.80
epoch 7
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3474 - categorical_accuracy: 0.8533
train auc=0.57
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 46.50%
val auc=0.55
epoch 8
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3279 - categorical_accuracy: 0.8594
train auc=0.67
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 46.00%
val auc=0.63
epoch 9
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2792 - categorical_accuracy: 0.8811
train auc=0.69
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 46.50%
val auc=0.66
epoch 10
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2526 - categorical_accuracy: 0.8994
train auc=0.76
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 51.50%
val auc=0.73
epoch 11
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2458 - categorical_accuracy: 0.9006
train auc=0.86
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.50%
val auc=0.83
epoch 12
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3467 - categorical_accuracy: 0.8472
train auc=0.95
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 55.00%
val auc=0.83
epoch 13
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2273 - categorical_accuracy: 0.9083
train auc=0.91
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 50.00%
val auc=0.84
epoch 14
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1780 - categorical_accuracy: 0.9372
train auc=0.86
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 67.00%
val auc=0.79
epoch 15
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1557 - categorical_accuracy: 0.9428
train auc=1.00
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 60.00%
val auc=0.84
epoch 16
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1446 - categorical_accuracy: 0.9489
train auc=0.93
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 47.50%
val auc=0.82
epoch 17
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1351 - categorical_accuracy: 0.9522
train auc=0.99
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 74.50%
val auc=0.80
epoch 18
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.0887 - categorical_accuracy: 0.9789
train auc=0.99
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 53.50%
val auc=0.85
epoch 19
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.0434 - categorical_accuracy: 0.9933
train auc=0.99
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 54.00%
val auc=0.74
geoffwoollard commented 5 years ago

For half of the core region missing (missing4), a model with only 4 hidden layers and a 256-unit dense layer trains slowly. So bump the dense layer up to 512 units.

model = deepconsensus_layers_wrapper(
    input_shape=X_val.shape[1::],
    num_hidden_layers=4,
    conv2d1_k=(15,9,7,3),
    conv2d2_k=(15,9,7,3),
    conv2d1_n=(8,8,32,64),
    conv2d2_n=(8,16,32,64),
    mp_k=(3,3,3,4),
    mp_strides=(4,2,2,2),
    pooling_type=('max','max','max','av'),
    dense13_n=256,
    dropout13_rate=0.5,
)
model.summary()

models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-2-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6966 - categorical_accuracy: 0.4944
train auc=0.54
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 44.50%
val auc=0.39
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-3-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6968 - categorical_accuracy: 0.5178
train auc=0.51
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 44.50%
val auc=0.40
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-4-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6954 - categorical_accuracy: 0.5017
train auc=0.49
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 42.50%
val auc=0.39
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-5-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6956 - categorical_accuracy: 0.4950
train auc=0.52
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 45.00%
val auc=0.42
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-6-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6932 - categorical_accuracy: 0.5139
train auc=0.55
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 42.50%
val auc=0.32
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-7-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6933 - categorical_accuracy: 0.5072
train auc=0.54
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 41.50%
val auc=0.24
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-8-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6926 - categorical_accuracy: 0.5156
train auc=0.59
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 44.00%
val auc=0.21
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-9-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6912 - categorical_accuracy: 0.5311
train auc=0.58
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 44.00%
val auc=0.24
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-10-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6896 - categorical_accuracy: 0.5256
train auc=0.60
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 38.50%
val auc=0.28
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-11-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6891 - categorical_accuracy: 0.5383
train auc=0.67
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 41.50%
val auc=0.34
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-12-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6762 - categorical_accuracy: 0.5878
train auc=0.69
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 51.00%
val auc=0.56
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-13-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6485 - categorical_accuracy: 0.6244
train auc=0.59
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.62
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-14-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6011 - categorical_accuracy: 0.6856
train auc=0.65
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.65
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-15-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5739 - categorical_accuracy: 0.6928
train auc=0.61
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.62
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-16-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5405 - categorical_accuracy: 0.7311
train auc=0.58
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.59
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-17-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5177 - categorical_accuracy: 0.7478
train auc=0.54
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.54
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-18-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4986 - categorical_accuracy: 0.7639
train auc=0.46
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 47.50%
val auc=0.41
models/model-t20s-missingmiddle4-J128-J172-300pix-256dense-1.8Mparams-ep-19-20190402-2056
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4812 - categorical_accuracy: 0.7706
train auc=0.53
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.50%
val auc=0.48
geoffwoollard commented 5 years ago

Yes — with a 512-unit dense layer instead of 256, the model can fit the training data almost perfectly (train AUC reaches 1.00 by the final epoch), while the validation AUC peaks around 0.70–0.78.

# Train one epoch at a time so we can log train/val ROC AUC and
# checkpoint the model after every epoch.
# roc_aucs, fprs, tprs, thresholds are dicts and timestr is a
# timestamp string, all defined earlier in the notebook.
title_stem = 'models/model-t20s-missingmiddle4-J128-J172-300pix-512dense-1.8Mparams-ep-%s-'
for epoch in range(30):
    print('epoch %i' % epoch)
    # steps_per_epoch is the number of batches per epoch
    model.fit_generator(fit_generator_helper.image_loader(df_train, batch_size=batch_size, nx=nx, ny=ny, crop_n=crop_n),
                        steps_per_epoch=steps_per_epoch,
                        epochs=1,
                        )
    roc_auc_train, _, _, _ = roc_auc(X_train, Y_train, model)
    print('train auc=%.2f' % roc_auc_train)

    scores = model.evaluate(X_val, Y_val)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

    roc_aucs[epoch], fprs[epoch], tprs[epoch], thresholds[epoch] = roc_auc(X_val, Y_val, model)
    print('val auc=%.2f' % roc_aucs[epoch])

    # checkpoint: save architecture (YAML) and weights (HDF5) each epoch
    title = title_stem % epoch
    model_yaml = model.to_yaml()
    with open(title + timestr + '.yaml', 'w') as yaml_file:
        yaml_file.write(model_yaml)
    model.save_weights(title + timestr + '.h5')
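The `roc_auc` helper called in the loop is defined elsewhere in the repo; a minimal sketch of what it could look like, assuming `model.predict` returns two-column softmax scores and the labels `Y` are one-hot encoded (both assumptions, not confirmed by this thread):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_auc(X, Y, model):
    """ROC AUC for a binary classifier with one-hot labels.

    Hypothetical reimplementation: assumes model.predict(X) returns
    (n, 2) softmax scores and Y is (n, 2) one-hot, matching how
    roc_auc is unpacked in the training loop above.
    """
    scores = model.predict(X)[:, 1]   # score for the positive class
    y_true = Y[:, 1]                  # one-hot -> binary labels
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return auc(fpr, tpr), fpr, tpr, thresholds
```

Returning the full `(auc, fpr, tpr, thresholds)` tuple is what lets the loop both print the scalar AUC and store the curves per epoch.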

epoch 0
Epoch 1/1
18/18 [==============================] - 40s 2s/step - loss: 1.0491 - categorical_accuracy: 0.4967
train auc=0.49
200/200 [==============================] - 4s 20ms/step
categorical_accuracy: 48.00%
val auc=0.46
epoch 1
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.7047 - categorical_accuracy: 0.5033
train auc=0.57
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 40.00%
val auc=0.35
epoch 2
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.7013 - categorical_accuracy: 0.5056
train auc=0.55
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.41
epoch 3
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.7002 - categorical_accuracy: 0.5044
train auc=0.59
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 47.00%
val auc=0.39
epoch 4
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6981 - categorical_accuracy: 0.4967
train auc=0.65
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 46.50%
val auc=0.33
epoch 5
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6907 - categorical_accuracy: 0.5417
train auc=0.64
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 45.50%
val auc=0.38
epoch 6
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6677 - categorical_accuracy: 0.6022
train auc=0.73
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 56.50%
val auc=0.66
epoch 7
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.6122 - categorical_accuracy: 0.6750
train auc=0.78
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.72
epoch 8
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5672 - categorical_accuracy: 0.7078
train auc=0.80
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.72
epoch 9
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5329 - categorical_accuracy: 0.7311
train auc=0.78
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 57.50%
val auc=0.69
epoch 10
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.5224 - categorical_accuracy: 0.7344
train auc=0.79
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.50%
val auc=0.68
epoch 11
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4759 - categorical_accuracy: 0.7694
train auc=0.77
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.50%
val auc=0.66
epoch 12
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4653 - categorical_accuracy: 0.7872
train auc=0.74
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 59.50%
val auc=0.63
epoch 13
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4601 - categorical_accuracy: 0.7811
train auc=0.73
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 55.00%
val auc=0.63
epoch 14
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4153 - categorical_accuracy: 0.8106
train auc=0.67
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 53.00%
val auc=0.57
epoch 15
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3938 - categorical_accuracy: 0.8294
train auc=0.60
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.00%
val auc=0.54
epoch 16
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4365 - categorical_accuracy: 0.7961
train auc=0.97
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 67.00%
val auc=0.77
epoch 17
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3693 - categorical_accuracy: 0.8389
train auc=0.86
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 54.00%
val auc=0.65
epoch 18
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3564 - categorical_accuracy: 0.8444
train auc=0.67
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.00%
val auc=0.53
epoch 19
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3092 - categorical_accuracy: 0.8761
train auc=0.98
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.50%
val auc=0.73
epoch 20
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.3792 - categorical_accuracy: 0.8372
train auc=0.65
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 53.00%
val auc=0.54
epoch 21
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2522 - categorical_accuracy: 0.9028
train auc=0.62
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.00%
val auc=0.52
epoch 22
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.4442 - categorical_accuracy: 0.8022
train auc=0.97
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 60.50%
val auc=0.78
epoch 23
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2368 - categorical_accuracy: 0.9122
train auc=0.95
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 53.50%
val auc=0.68
epoch 24
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2428 - categorical_accuracy: 0.9139
train auc=0.71
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.00%
val auc=0.55
epoch 25
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1529 - categorical_accuracy: 0.9589
train auc=0.95
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.00%
val auc=0.71
epoch 26
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1445 - categorical_accuracy: 0.9450
train auc=0.95
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 48.50%
val auc=0.72
epoch 27
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.2545 - categorical_accuracy: 0.9006
train auc=0.88
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 52.50%
val auc=0.69
epoch 28
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.1002 - categorical_accuracy: 0.9717
train auc=0.90
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 49.00%
val auc=0.68
epoch 29
Epoch 1/1
18/18 [==============================] - 31s 2s/step - loss: 0.0672 - categorical_accuracy: 0.9811
train auc=1.00
200/200 [==============================] - 1s 5ms/step
categorical_accuracy: 65.00%
val auc=0.72
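Since the validation AUC fluctuates a lot from epoch to epoch (e.g. 0.78 at epoch 22 but 0.52 at epoch 21), picking a checkpoint by eye is error-prone. One quick option is to parse the printed log for the `val auc=` lines and take the argmax; a sketch, where the `log` string is a stand-in for the output pasted above:

```python
import re

# Stand-in for the training log above; paste the real output here.
log = """
epoch 21
val auc=0.52
epoch 22
val auc=0.78
epoch 23
val auc=0.68
"""

# Collect every validation AUC in epoch order, then report the best.
val_aucs = [float(m) for m in re.findall(r"val auc=([\d.]+)", log)]
best = max(range(len(val_aucs)), key=val_aucs.__getitem__)
print("best index %d, val auc=%.2f" % (best, val_aucs[best]))
```

The best index maps back to an epoch-numbered checkpoint file saved by the loop, since one model is written per epoch.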