sdv-dev / TGAN

Generative adversarial training for generating synthetic tabular data.
MIT License

Why was my model retrained? #9

Closed harrytrinh2 closed 5 years ago

harrytrinh2 commented 5 years ago

After 4 hours of training, this is the log:

Total #vars=97, #params=8659360, size=33.03MB
[0327 10:19:47 @base.py:187] Setup callbacks graph ...
[0327 10:19:47 @base.py:205] Creating the session ...
[0327 10:19:47 @summary.py:38] Maintain moving average summary of 6 tensors in collection MOVING_SUMMARY_OPS.
[0327 10:19:47 @summary.py:75] Summarizing collection 'summaries' of size 9.
[0327 10:19:47 @graph.py:91] Applying collection UPDATE_OPS of 20 ops.
2019-03-27 10:19:47.567267: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 10:19:48 @base.py:205] Creating the session ...
[0327 10:19:48 @base.py:211] Initializing the session ...
[0327 10:19:48 @base.py:218] Graph Finalized.
[0327 10:19:48 @concurrency.py:37] Starting EnqueueThread QueueInput/input_queue ...
2019-03-27 10:19:48.785232: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 10:19:49 @base.py:250] Start Epoch 1 ...
2019-03-27 10:19:49.075548: W tensorflow/core/framework/allocator.cc:122] Allocation of 20160000 exceeds 10% of system memory.
2019-03-27 10:19:49.084332: W tensorflow/core/framework/allocator.cc:122] Allocation of 20160000 exceeds 10% of system memory.
2019-03-27 10:19:49.109645: W tensorflow/core/framework/allocator.cc:122] Allocation of 20160000 exceeds 10% of system memory.
2019-03-27 10:19:49.114194: W tensorflow/core/framework/allocator.cc:122] Allocation of 20160000 exceeds 10% of system memory.
2019-03-27 10:19:49.188510: W tensorflow/core/framework/allocator.cc:122] Allocation of 20160000 exceeds 10% of system memory.
[0327 10:19:50 @base.py:211] Initializing the session ...
[0327 10:19:50 @base.py:218] Graph Finalized.
[0327 10:19:50 @concurrency.py:37] Starting EnqueueThread QueueInput/input_queue ...
[0327 10:19:50 @base.py:250] Start Epoch 1 ...
[0327 11:08:21 @base.py:260] Epoch 1 (global_step 10000) finished, time:48 minutes 32 seconds.
[0327 11:08:21 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-2/model-10000.
[0327 11:08:21 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 0.91
[0327 11:08:21 @monitor.py:440] GAN_loss/discrim/accuracy_real: 0.59
[0327 11:08:21 @monitor.py:440] GAN_loss/discrim/loss: 0.47944
[0327 11:08:21 @monitor.py:440] GAN_loss/gen/final-g-loss: 2.2348
[0327 11:08:21 @monitor.py:440] GAN_loss/gen/klloss: 0.12783
[0327 11:08:21 @monitor.py:440] GAN_loss/gen/loss: 2.107
[0327 11:08:21 @monitor.py:440] QueueInput/queue_size: 50
[0327 11:08:21 @base.py:250] Start Epoch 2 ...
[0327 11:57:04 @base.py:260] Epoch 2 (global_step 20000) finished, time:48 minutes 42 seconds.
[0327 11:57:04 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-2/model-20000.
[0327 11:57:04 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 1
[0327 11:57:04 @monitor.py:440] GAN_loss/discrim/accuracy_real: 0.97
[0327 11:57:04 @monitor.py:440] GAN_loss/discrim/loss: 0.14634
[0327 11:57:04 @monitor.py:440] GAN_loss/gen/final-g-loss: 9.0766
[0327 11:57:04 @monitor.py:440] GAN_loss/gen/klloss: 0.28772
[0327 11:57:04 @monitor.py:440] GAN_loss/gen/loss: 8.7889
[0327 11:57:04 @monitor.py:440] QueueInput/queue_size: 50
[0327 11:57:04 @base.py:250] Start Epoch 3 ...
[0327 12:07:39 @base.py:260] Epoch 1 (global_step 10000) finished, time:1 hour 47 minutes 48 seconds.
[0327 12:07:39 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-0/model-10000.
[0327 12:07:39 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 1
[0327 12:07:39 @monitor.py:440] GAN_loss/discrim/accuracy_real: 0.98
[0327 12:07:39 @monitor.py:440] GAN_loss/discrim/loss: 0.25723
[0327 12:07:39 @monitor.py:440] GAN_loss/gen/final-g-loss: 2.6428
[0327 12:07:39 @monitor.py:440] GAN_loss/gen/klloss: 0.11113
[0327 12:07:39 @monitor.py:440] GAN_loss/gen/loss: 2.5316
[0327 12:07:39 @monitor.py:440] QueueInput/queue_size: 50
[0327 12:07:39 @base.py:250] Start Epoch 2 ...
[0327 12:43:27 @base.py:260] Epoch 3 (global_step 30000) finished, time:46 minutes 23 seconds.
[0327 12:43:27 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-2/model-30000.
[0327 12:43:27 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 1
[0327 12:43:27 @monitor.py:440] GAN_loss/discrim/accuracy_real: 1
[0327 12:43:27 @monitor.py:440] GAN_loss/discrim/loss: 0.10424
[0327 12:43:27 @monitor.py:440] GAN_loss/gen/final-g-loss: 7.3784
[0327 12:43:27 @monitor.py:440] GAN_loss/gen/klloss: 0.20924
[0327 12:43:27 @monitor.py:440] GAN_loss/gen/loss: 7.1691
[0327 12:43:27 @monitor.py:440] QueueInput/queue_size: 50
[0327 12:43:27 @base.py:250] Start Epoch 4 ...
[0327 13:31:55 @base.py:260] Epoch 4 (global_step 40000) finished, time:48 minutes 27 seconds.
[0327 13:31:55 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-2/model-40000.
[0327 13:31:55 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 0.94
[0327 13:31:55 @monitor.py:440] GAN_loss/discrim/accuracy_real: 1
[0327 13:31:55 @monitor.py:440] GAN_loss/discrim/loss: 0.19482
[0327 13:31:55 @monitor.py:440] GAN_loss/gen/final-g-loss: 3.2029
[0327 13:31:55 @monitor.py:440] GAN_loss/gen/klloss: 0.32491
[0327 13:31:55 @monitor.py:440] GAN_loss/gen/loss: 2.878
[0327 13:31:55 @monitor.py:440] QueueInput/queue_size: 50
[0327 13:31:55 @base.py:250] Start Epoch 5 ...
[0327 13:55:47 @base.py:260] Epoch 2 (global_step 20000) finished, time:1 hour 48 minutes 7 seconds.
[0327 13:55:48 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-0/model-20000.
[0327 13:55:48 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 0.96
[0327 13:55:48 @monitor.py:440] GAN_loss/discrim/accuracy_real: 1
[0327 13:55:48 @monitor.py:440] GAN_loss/discrim/loss: 0.18884
[0327 13:55:48 @monitor.py:440] GAN_loss/gen/final-g-loss: 7.0647
[0327 13:55:48 @monitor.py:440] GAN_loss/gen/klloss: 2.444
[0327 13:55:48 @monitor.py:440] GAN_loss/gen/loss: 4.6207
[0327 13:55:48 @monitor.py:440] QueueInput/queue_size: 50
[0327 13:55:48 @base.py:250] Start Epoch 3 ...
[0327 14:22:41 @base.py:260] Epoch 5 (global_step 50000) finished, time:50 minutes 46 seconds.
[0327 14:22:41 @saver.py:77] Model saved to train_log/TGAN_synthesizer:KDD-2/model-50000.
[0327 14:22:41 @monitor.py:440] GAN_loss/discrim/accuracy_fake: 0.99
[0327 14:22:41 @monitor.py:440] GAN_loss/discrim/accuracy_real: 1
[0327 14:22:41 @monitor.py:440] GAN_loss/discrim/loss: 0.13903
[0327 14:22:41 @monitor.py:440] GAN_loss/gen/final-g-loss: 4.2788
[0327 14:22:41 @monitor.py:440] GAN_loss/gen/klloss: 0.28196
[0327 14:22:41 @monitor.py:440] GAN_loss/gen/loss: 3.9968
[0327 14:22:41 @monitor.py:440] QueueInput/queue_size: 50
[0327 14:22:41 @base.py:264] Training has finished!
2019-03-27 14:22:41.924330: W tensorflow/core/kernels/queue_base.cc:277] _0_QueueInput/input_queue: Skipping cancelled enqueue attempt with queue not closed
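Aside for anyone reading a dump like this: the per-epoch metrics all follow the fixed `[MMDD HH:MM:SS @monitor.py:NN] name: value` line format, so they can be recovered from the console output even when progress bars from several processes interleave around them. A minimal sketch, assuming only the line format visible above (the sample lines are copied from this log):

```python
import re

# Matches tensorpack monitor lines such as:
# [0327 11:57:04 @monitor.py:440] GAN_loss/discrim/loss: 0.14634
MONITOR_RE = re.compile(
    r"\[\d{4} \d{2}:\d{2}:\d{2} @monitor\.py:\d+\] "
    r"(?P<name>[\w/.-]+): (?P<value>[-+eE.\d]+)"
)

def parse_metrics(lines):
    """Collect {metric_name: [values in log order]} from raw log lines."""
    metrics = {}
    for line in lines:
        m = MONITOR_RE.search(line)  # search, not match: bars may prefix the line
        if m:
            metrics.setdefault(m.group("name"), []).append(float(m.group("value")))
    return metrics

log = [
    "[0327 11:08:21 @monitor.py:440] GAN_loss/discrim/loss: 0.47944",
    "[0327 11:57:04 @monitor.py:440] GAN_loss/discrim/loss: 0.14634",
    "[0327 11:57:04 @base.py:250] Start Epoch 3 ...",
]
print(parse_metrics(log))
# {'GAN_loss/discrim/loss': [0.47944, 0.14634]}
```

Plotting the resulting series per experiment makes it much easier to compare the trials than reading the interleaved console output.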

Then the model was automatically retrained, and the log showed:

[0327 14:23:08 @collection.py:145] New collections created in tower : tf.GraphKeys.REGULARIZATION_LOSSES
[0327 14:23:08 @collection.py:164] These collections were modified but restored in : (tf.GraphKeys.SUMMARIES: 0->2)
[0327 14:23:08 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph: global_step:0, optimize/beta1_power:0, optimize/beta2_power:0
2019-03-27 14:23:08.329825: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 14:23:08 @sessinit.py:117] Restoring checkpoint from train_log/TGAN_synthesizer:KDD-2/model-40000 ...
[0327 14:23:22 @develop.py:96] WRN [Deprecated] ModelDescBase._get_inputs() interface will be deprecated after 30 Mar. Use inputs() instead!
[0327 14:23:22 @develop.py:96] WRN [Deprecated] ModelDescBase._build_graph() interface will be deprecated after 30 Mar. Use build_graph() instead!
[0327 14:23:22 @registry.py:121] gen/LSTM/00/FC input: [100, 200]
[0327 14:23:22 @registry.py:129] gen/LSTM/00/FC output: [100, 500]
[0327 14:23:22 @registry.py:121] gen/LSTM/00/FC2 input: [100, 500]
[0327 14:23:22 @registry.py:129] gen/LSTM/00/FC2 output: [100, 1]
WARNING:tensorflow:From src/TGAN_synthesizer.py:71: calling softmax (from tensorflow.python.ops.nn_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
[0327 14:23:22 @registry.py:121] gen/LSTM/01/FC input: [100, 200]
[0327 14:23:22 @registry.py:129] gen/LSTM/01/FC output: [100, 500]
[0327 14:23:22 @registry.py:121] gen/LSTM/01/FC2 input: [100, 500]
[0327 14:23:22 @registry.py:129] gen/LSTM/01/FC2 output: [100, 5]
[0327 14:23:22 @registry.py:121] gen/LSTM/01/FC3 input: [100, 5]
[0327 14:23:22 @registry.py:129] gen/LSTM/01/FC3 output: [100, 500]
[0327 14:23:22 @registry.py:121] gen/LSTM/02/FC input: [100, 200]
[0327 14:23:22 @registry.py:129] gen/LSTM/02/FC output: [100, 500]
[0327 14:23:22 @registry.py:121] gen/LSTM/02/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/02/FC2 output: [100, 3]
[0327 14:23:23 @registry.py:121] gen/LSTM/02/FC3 input: [100, 3]
[0327 14:23:23 @registry.py:129] gen/LSTM/02/FC3 output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/03/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/03/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/03/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/03/FC2 output: [100, 1]
[0327 14:23:23 @registry.py:121] gen/LSTM/04/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/04/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/04/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/04/FC2 output: [100, 5]
[0327 14:23:23 @registry.py:121] gen/LSTM/04/FC3 input: [100, 5]
[0327 14:23:23 @registry.py:129] gen/LSTM/04/FC3 output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/05/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/05/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/05/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/05/FC2 output: [100, 1]
[0327 14:23:23 @registry.py:121] gen/LSTM/06/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/06/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/06/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/06/FC2 output: [100, 5]
[0327 14:23:23 @registry.py:121] gen/LSTM/06/FC3 input: [100, 5]
[0327 14:23:23 @registry.py:129] gen/LSTM/06/FC3 output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/07/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/07/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/07/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/07/FC2 output: [100, 1]
[0327 14:23:23 @registry.py:121] gen/LSTM/08/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/08/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/08/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/08/FC2 output: [100, 5]
[0327 14:23:23 @registry.py:121] gen/LSTM/08/FC3 input: [100, 5]
[0327 14:23:23 @registry.py:129] gen/LSTM/08/FC3 output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/09/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/09/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/09/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/09/FC2 output: [100, 1]
[0327 14:23:23 @registry.py:121] gen/LSTM/10/FC input: [100, 200]
[0327 14:23:23 @registry.py:129] gen/LSTM/10/FC output: [100, 500]
[0327 14:23:23 @registry.py:121] gen/LSTM/10/FC2 input: [100, 500]
[0327 14:23:23 @registry.py:129] gen/LSTM/10/FC2 output: [100, 5]
[0327 14:23:23 @registry.py:121] gen/LSTM/10/FC3 input: [100, 5]
[0327 14:23:23 @registry.py:129] gen/LSTM/10/FC3 output: [100, 500]
[0327 14:23:23 @registry.py:121] discrim/dis_fc0/fc input: [100, 33]
[0327 14:23:23 @registry.py:129] discrim/dis_fc0/fc output: [100, 500]
[0327 14:23:23 @registry.py:121] discrim/dis_fc0/fc_diversity input: [100, 500]
[0327 14:23:23 @registry.py:129] discrim/dis_fc0/fc_diversity output: [100, 100]
[0327 14:23:23 @registry.py:121] discrim/dis_fc1/fc input: [100, 510]
[0327 14:23:23 @registry.py:129] discrim/dis_fc1/fc output: [100, 500]
[0327 14:23:23 @registry.py:121] discrim/dis_fc1/fc_diversity input: [100, 500]
[0327 14:23:23 @registry.py:129] discrim/dis_fc1/fc_diversity output: [100, 100]
[0327 14:23:23 @registry.py:121] discrim/dis_fc2/fc input: [100, 510]
[0327 14:23:23 @registry.py:129] discrim/dis_fc2/fc output: [100, 500]
[0327 14:23:23 @registry.py:121] discrim/dis_fc2/fc_diversity input: [100, 500]
[0327 14:23:23 @registry.py:129] discrim/dis_fc2/fc_diversity output: [100, 100]
[0327 14:23:23 @registry.py:121] discrim/dis_fc_top input: [100, 510]
[0327 14:23:23 @registry.py:129] discrim/dis_fc_top output: [100, 1]
[0327 14:23:24 @collection.py:145] New collections created in tower : tf.GraphKeys.REGULARIZATION_LOSSES
[0327 14:23:24 @collection.py:164] These collections were modified but restored in : (tf.GraphKeys.SUMMARIES: 0->2)
[0327 14:23:24 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph: global_step:0, optimize/beta1_power:0, optimize/beta2_power:0
2019-03-27 14:23:24.578021: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 14:23:24 @sessinit.py:117] Restoring checkpoint from train_log/TGAN_synthesizer:KDD-2/model-50000 ...
[0327 14:23:39 @logger.py:74] Argv: src/TGAN_synthesizer.py --batch_size 50 --z_dim 100 --num_gen_rnn 400 --num_gen_feature 300 --num_dis_layers 4 --num_dis_hidden 400 --learning_rate 0.001 --noise 0.1 --exp_name KDD-3 --max_epoch 5 --steps_per_epoch 10000 --data expdir/KDD/train.npz --gpu 0
[0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._get_inputs() interface will be deprecated after 30 Mar. Use inputs() instead!
[0327 14:23:39 @input_source.py:221] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
[0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._build_graph() interface will be deprecated after 30 Mar. Use build_graph() instead!
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC2 output: [50, 1]
WARNING:tensorflow:From src/TGAN_synthesizer.py:71: calling softmax (from tensorflow.python.ops.nn_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/02/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/02/FC2 output: [50, 3]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC3 input: [50, 3]
[0327 14:23:39 @registry.py:129] gen/LSTM/02/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/03/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/03/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/03/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/03/FC2 output: [50, 1]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/05/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/05/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/05/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/05/FC2 output: [50, 1]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC3 output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/07/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/07/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/07/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/07/FC2 output: [50, 1]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC2 output: [50, 5]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC3 input: [50, 5]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC3 output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/09/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/09/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/09/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/09/FC2 output: [50, 1]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC2 output: [50, 5]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC3 input: [50, 5]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC3 output: [50, 300]
WARNING:tensorflow:From src/TGAN_synthesizer.py:171: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
[0327 14:23:40 @registry.py:121] discrim/dis_fc0/fc input: [50, 33]
[0327 14:23:40 @registry.py:129] discrim/dis_fc0/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc0/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc0/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc1/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc1/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc1/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc1/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc2/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc2/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc2/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc2/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc3/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc3/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc3/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc3/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc_top input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc_top output: [50, 1]
[0327 14:23:46 @model_utils.py:64] Trainable Variables: 
name                              shape             dim
--------------------------------  ------------  -------
gen/LSTM/go:0                     [1, 300]          300
gen/LSTM/lstm_cell/kernel:0       [1200, 1600]  1920000
gen/LSTM/lstm_cell/bias:0         [1600]           1600
gen/LSTM/00/FC/W:0                [400, 300]     120000
gen/LSTM/00/FC/b:0                [300]             300
gen/LSTM/00/FC2/W:0               [300, 1]          300
gen/LSTM/00/FC2/b:0               [1]                 1
gen/LSTM/00/attw:0                [1, 1, 1]           1
gen/LSTM/01/FC/W:0                [400, 300]     120000
gen/LSTM/01/FC/b:0                [300]             300
gen/LSTM/01/FC2/W:0               [300, 5]         1500
gen/LSTM/01/FC2/b:0               [5]                 5
gen/LSTM/01/FC3/W:0               [5, 300]         1500
gen/LSTM/01/FC3/b:0               [300]             300
gen/LSTM/01/attw:0                [2, 1, 1]           2
gen/LSTM/02/FC/W:0                [400, 300]     120000
gen/LSTM/02/FC/b:0                [300]             300
gen/LSTM/02/FC2/W:0               [300, 3]          900
gen/LSTM/02/FC2/b:0               [3]                 3
gen/LSTM/02/FC3/W:0               [3, 300]          900
gen/LSTM/02/FC3/b:0               [300]             300
gen/LSTM/02/attw:0                [3, 1, 1]           3
gen/LSTM/03/FC/W:0                [400, 300]     120000
gen/LSTM/03/FC/b:0                [300]             300
gen/LSTM/03/FC2/W:0               [300, 1]          300
gen/LSTM/03/FC2/b:0               [1]                 1
gen/LSTM/03/attw:0                [4, 1, 1]           4
gen/LSTM/04/FC/W:0                [400, 300]     120000
gen/LSTM/04/FC/b:0                [300]             300
gen/LSTM/04/FC2/W:0               [300, 5]         1500
gen/LSTM/04/FC2/b:0               [5]                 5
gen/LSTM/04/FC3/W:0               [5, 300]         1500
gen/LSTM/04/FC3/b:0               [300]             300
gen/LSTM/04/attw:0                [5, 1, 1]           5
gen/LSTM/05/FC/W:0                [400, 300]     120000
gen/LSTM/05/FC/b:0                [300]             300
gen/LSTM/05/FC2/W:0               [300, 1]          300
gen/LSTM/05/FC2/b:0               [1]                 1
gen/LSTM/05/attw:0                [6, 1, 1]           6
gen/LSTM/06/FC/W:0                [400, 300]     120000
gen/LSTM/06/FC/b:0                [300]             300
gen/LSTM/06/FC2/W:0               [300, 5]         1500
gen/LSTM/06/FC2/b:0               [5]                 5
gen/LSTM/06/FC3/W:0               [5, 300]         1500
gen/LSTM/06/FC3/b:0               [300]             300
gen/LSTM/06/attw:0                [7, 1, 1]           7
gen/LSTM/07/FC/W:0                [400, 300]     120000
gen/LSTM/07/FC/b:0                [300]             300
gen/LSTM/07/FC2/W:0               [300, 1]          300
gen/LSTM/07/FC2/b:0               [1]                 1
gen/LSTM/07/attw:0                [8, 1, 1]           8
gen/LSTM/08/FC/W:0                [400, 300]     120000
gen/LSTM/08/FC/b:0                [300]             300
gen/LSTM/08/FC2/W:0               [300, 5]         1500
gen/LSTM/08/FC2/b:0               [5]                 5
gen/LSTM/08/FC3/W:0               [5, 300]         1500
gen/LSTM/08/FC3/b:0               [300]             300
gen/LSTM/08/attw:0                [9, 1, 1]           9
gen/LSTM/09/FC/W:0                [400, 300]     120000
gen/LSTM/09/FC/b:0                [300]             300
gen/LSTM/09/FC2/W:0               [300, 1]          300
gen/LSTM/09/FC2/b:0               [1]                 1
gen/LSTM/09/attw:0                [10, 1, 1]         10
gen/LSTM/10/FC/W:0                [400, 300]     120000
gen/LSTM/10/FC/b:0                [300]             300
gen/LSTM/10/FC2/W:0               [300, 5]         1500
gen/LSTM/10/FC2/b:0               [5]                 5
gen/LSTM/10/FC3/W:0               [5, 300]         1500
gen/LSTM/10/FC3/b:0               [300]             300
gen/LSTM/10/attw:0                [11, 1, 1]         11
discrim/dis_fc0/fc/W:0            [33, 400]       13200
discrim/dis_fc0/fc/b:0            [400]             400
discrim/dis_fc0/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc0/fc_diversity/b:0  [100]             100
discrim/dis_fc0/bn/beta:0         [410]             410
discrim/dis_fc1/fc/W:0            [410, 400]     164000
discrim/dis_fc1/fc/b:0            [400]             400
discrim/dis_fc1/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc1/fc_diversity/b:0  [100]             100
discrim/dis_fc1/bn/beta:0         [410]             410
discrim/dis_fc2/fc/W:0            [410, 400]     164000
discrim/dis_fc2/fc/b:0            [400]             400
discrim/dis_fc2/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc2/fc_diversity/b:0  [100]             100
discrim/dis_fc2/bn/beta:0         [410]             410
discrim/dis_fc3/fc/W:0            [410, 400]     164000
discrim/dis_fc3/fc/b:0            [400]             400
discrim/dis_fc3/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc3/fc_diversity/b:0  [100]             100
discrim/dis_fc3/bn/beta:0         [410]             410
discrim/dis_fc_top/W:0            [410, 1]          410
discrim/dis_fc_top/b:0            [1]                 1
Total #vars=92, #params=3934650, size=15.01MB
[0327 14:23:46 @base.py:187] Setup callbacks graph ...
[0327 14:23:46 @summary.py:38] Maintain moving average summary of 6 tensors in collection MOVING_SUMMARY_OPS.
[0327 14:23:46 @summary.py:75] Summarizing collection 'summaries' of size 9.
[0327 14:23:46 @graph.py:91] Applying collection UPDATE_OPS of 16 ops.
[0327 14:23:48 @base.py:205] Creating the session ...
 25%|###############2                                             |2498/10000[27:36<1:08:27, 1.83it/s]2019-03-27 14:23:24.578021: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 14:23:24 @sessinit.py:117] Restoring checkpoint from train_log/TGAN_synthesizer:KDD-2/model-50000 ...
 25%|###############3                                             |2523/10000[27:50<1:07:24, 1.85it/s][0327 14:23:39 @logger.py:74] Argv: src/TGAN_synthesizer.py --batch_size 50 --z_dim 100 --num_gen_rnn 400 --num_gen_feature 300 --num_dis_layers 4 --num_dis_hidden 400 --learning_rate 0.001 --noise 0.1 --exp_name KDD-3 --max_epoch 5 --steps_per_epoch 10000 --data expdir/KDD/train.npz --gpu 0
 25%|###############3                                             |2524/10000[27:51<1:11:50, 1.73it/s][0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._get_inputs() interface will be deprecated after 30 Mar. Use inputs() instead!
[0327 14:23:39 @input_source.py:221] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
[0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._build_graph() interface will be deprecated after 30 Mar. Use build_graph() instead!
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC2 output: [50, 1]
WARNING:tensorflow:From src/TGAN_synthesizer.py:71: calling softmax (from tensorflow.python.ops.nn_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/01/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/01/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/02/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/02/FC2 output: [50, 3]
[0327 14:23:39 @registry.py:121] gen/LSTM/02/FC3 input: [50, 3]
 25%|###############4                                             |2525/10000[27:51<1:11:56, 1.73it/s][0327 14:23:39 @registry.py:129] gen/LSTM/02/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/03/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/03/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/03/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/03/FC2 output: [50, 1]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/04/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/04/FC3 output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/05/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/05/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/05/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/05/FC2 output: [50, 1]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC2 output: [50, 5]
[0327 14:23:39 @registry.py:121] gen/LSTM/06/FC3 input: [50, 5]
[0327 14:23:39 @registry.py:129] gen/LSTM/06/FC3 output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/07/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/07/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/07/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/07/FC2 output: [50, 1]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC2 output: [50, 5]
[0327 14:23:40 @registry.py:121] gen/LSTM/08/FC3 input: [50, 5]
[0327 14:23:40 @registry.py:129] gen/LSTM/08/FC3 output: [50, 300]
 25%|###############4                                             |2526/10000[27:52<1:11:04, 1.75it/s][0327 14:23:40 @registry.py:121] gen/LSTM/09/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/09/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/09/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/09/FC2 output: [50, 1]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC input: [50, 400]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC output: [50, 300]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC2 input: [50, 300]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC2 output: [50, 5]
[0327 14:23:40 @registry.py:121] gen/LSTM/10/FC3 input: [50, 5]
[0327 14:23:40 @registry.py:129] gen/LSTM/10/FC3 output: [50, 300]
WARNING:tensorflow:From src/TGAN_synthesizer.py:171: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
[0327 14:23:40 @registry.py:121] discrim/dis_fc0/fc input: [50, 33]
[0327 14:23:40 @registry.py:129] discrim/dis_fc0/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc0/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc0/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc1/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc1/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc1/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc1/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc2/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc2/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc2/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc2/fc_diversity output: [50, 100]
[0327 14:23:40 @registry.py:121] discrim/dis_fc3/fc input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc3/fc output: [50, 400]
[0327 14:23:40 @registry.py:121] discrim/dis_fc3/fc_diversity input: [50, 400]
[0327 14:23:40 @registry.py:129] discrim/dis_fc3/fc_diversity output: [50, 100]
 25%|###############4                                             |2527/10000[27:52<1:10:00, 1.78it/s][0327 14:23:40 @registry.py:121] discrim/dis_fc_top input: [50, 410]
[0327 14:23:40 @registry.py:129] discrim/dis_fc_top output: [50, 1]
 25%|###############4                                             |2538/10000[27:58<1:06:06, 1.88it/s][0327 14:23:46 @model_utils.py:64] Trainable Variables: 
name                              shape             dim
--------------------------------  ------------  -------
gen/LSTM/go:0                     [1, 300]          300
gen/LSTM/lstm_cell/kernel:0       [1200, 1600]  1920000
gen/LSTM/lstm_cell/bias:0         [1600]           1600
gen/LSTM/00/FC/W:0                [400, 300]     120000
gen/LSTM/00/FC/b:0                [300]             300
gen/LSTM/00/FC2/W:0               [300, 1]          300
gen/LSTM/00/FC2/b:0               [1]                 1
gen/LSTM/00/attw:0                [1, 1, 1]           1
gen/LSTM/01/FC/W:0                [400, 300]     120000
gen/LSTM/01/FC/b:0                [300]             300
gen/LSTM/01/FC2/W:0               [300, 5]         1500
gen/LSTM/01/FC2/b:0               [5]                 5
gen/LSTM/01/FC3/W:0               [5, 300]         1500
gen/LSTM/01/FC3/b:0               [300]             300
gen/LSTM/01/attw:0                [2, 1, 1]           2
gen/LSTM/02/FC/W:0                [400, 300]     120000
gen/LSTM/02/FC/b:0                [300]             300
gen/LSTM/02/FC2/W:0               [300, 3]          900
gen/LSTM/02/FC2/b:0               [3]                 3
gen/LSTM/02/FC3/W:0               [3, 300]          900
gen/LSTM/02/FC3/b:0               [300]             300
gen/LSTM/02/attw:0                [3, 1, 1]           3
gen/LSTM/03/FC/W:0                [400, 300]     120000
gen/LSTM/03/FC/b:0                [300]             300
gen/LSTM/03/FC2/W:0               [300, 1]          300
gen/LSTM/03/FC2/b:0               [1]                 1
gen/LSTM/03/attw:0                [4, 1, 1]           4
gen/LSTM/04/FC/W:0                [400, 300]     120000
gen/LSTM/04/FC/b:0                [300]             300
gen/LSTM/04/FC2/W:0               [300, 5]         1500
gen/LSTM/04/FC2/b:0               [5]                 5
gen/LSTM/04/FC3/W:0               [5, 300]         1500
gen/LSTM/04/FC3/b:0               [300]             300
gen/LSTM/04/attw:0                [5, 1, 1]           5
gen/LSTM/05/FC/W:0                [400, 300]     120000
gen/LSTM/05/FC/b:0                [300]             300
gen/LSTM/05/FC2/W:0               [300, 1]          300
gen/LSTM/05/FC2/b:0               [1]                 1
gen/LSTM/05/attw:0                [6, 1, 1]           6
gen/LSTM/06/FC/W:0                [400, 300]     120000
gen/LSTM/06/FC/b:0                [300]             300
gen/LSTM/06/FC2/W:0               [300, 5]         1500
gen/LSTM/06/FC2/b:0               [5]                 5
gen/LSTM/06/FC3/W:0               [5, 300]         1500
gen/LSTM/06/FC3/b:0               [300]             300
gen/LSTM/06/attw:0                [7, 1, 1]           7
gen/LSTM/07/FC/W:0                [400, 300]     120000
gen/LSTM/07/FC/b:0                [300]             300
gen/LSTM/07/FC2/W:0               [300, 1]          300
gen/LSTM/07/FC2/b:0               [1]                 1
gen/LSTM/07/attw:0                [8, 1, 1]           8
gen/LSTM/08/FC/W:0                [400, 300]     120000
gen/LSTM/08/FC/b:0                [300]             300
gen/LSTM/08/FC2/W:0               [300, 5]         1500
gen/LSTM/08/FC2/b:0               [5]                 5
gen/LSTM/08/FC3/W:0               [5, 300]         1500
gen/LSTM/08/FC3/b:0               [300]             300
gen/LSTM/08/attw:0                [9, 1, 1]           9
gen/LSTM/09/FC/W:0                [400, 300]     120000
gen/LSTM/09/FC/b:0                [300]             300
gen/LSTM/09/FC2/W:0               [300, 1]          300
gen/LSTM/09/FC2/b:0               [1]                 1
gen/LSTM/09/attw:0                [10, 1, 1]         10
gen/LSTM/10/FC/W:0                [400, 300]     120000
gen/LSTM/10/FC/b:0                [300]             300
gen/LSTM/10/FC2/W:0               [300, 5]         1500
gen/LSTM/10/FC2/b:0               [5]                 5
gen/LSTM/10/FC3/W:0               [5, 300]         1500
gen/LSTM/10/FC3/b:0               [300]             300
gen/LSTM/10/attw:0                [11, 1, 1]         11
discrim/dis_fc0/fc/W:0            [33, 400]       13200
discrim/dis_fc0/fc/b:0            [400]             400
discrim/dis_fc0/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc0/fc_diversity/b:0  [100]             100
discrim/dis_fc0/bn/beta:0         [410]             410
discrim/dis_fc1/fc/W:0            [410, 400]     164000
discrim/dis_fc1/fc/b:0            [400]             400
discrim/dis_fc1/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc1/fc_diversity/b:0  [100]             100
discrim/dis_fc1/bn/beta:0         [410]             410
discrim/dis_fc2/fc/W:0            [410, 400]     164000
discrim/dis_fc2/fc/b:0            [400]             400
discrim/dis_fc2/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc2/fc_diversity/b:0  [100]             100
discrim/dis_fc2/bn/beta:0         [410]             410
discrim/dis_fc3/fc/W:0            [410, 400]     164000
discrim/dis_fc3/fc/b:0            [400]             400
discrim/dis_fc3/fc_diversity/W:0  [400, 100]      40000
discrim/dis_fc3/fc_diversity/b:0  [100]             100
discrim/dis_fc3/bn/beta:0         [410]             410
discrim/dis_fc_top/W:0            [410, 1]          410
discrim/dis_fc_top/b:0            [1]                 1
Total #vars=92, #params=3934650, size=15.01MB
[0327 14:23:46 @base.py:187] Setup callbacks graph ...
[0327 14:23:46 @summary.py:38] Maintain moving average summary of 6 tensors in collection MOVING_SUMMARY_OPS.
[0327 14:23:46 @summary.py:75] Summarizing collection 'summaries' of size 9.
[0327 14:23:46 @graph.py:91] Applying collection UPDATE_OPS of 16 ops.
 25%|###############4                                             |2540/10000[27:59<1:08:34, 1.81it/s][0327 14:23:48 @base.py:205] Creating the session ...


Why did this problem occur? Please explain it to me.
ManuelAlvarezC commented 5 years ago

Hi @TrinhDinhPhuc,

Would you mind sharing with us some more details like:

Thanks.

harrytrinh2 commented 5 years ago

Hi, I am using Ubuntu 18.04 and Python 3.6. This is my configuration file:

[
    {
        "name": "census",
        "num_random_search": 10,
        "train_csv": "data/census-train.csv",
        "continuous_cols": [0, 2, 3, 4, 5],
        "epoch": 5,
        "steps_per_epoch": 10000,
        "output_epoch": 3,
        "sample_rows": 10000
    }
]

I simply ran python3.6 src/launcher.py demo_config.json as instructed in the README. After 4 hours, while it was training epoch 4, it suddenly showed these lines:

[0327 14:23:24 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph: global_step:0, optimize/beta1_power:0, optimize/beta2_power:0
 25%|###############2                                             |2498/10000[27:36<1:08:27, 1.83it/s]2019-03-27 14:23:24.578021: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0327 14:23:24 @sessinit.py:117] Restoring checkpoint from train_log/TGAN_synthesizer:KDD-2/model-50000 ...
 25%|###############3                                             |2523/10000[27:50<1:07:24, 1.85it/s][0327 14:23:39 @logger.py:74] Argv: src/TGAN_synthesizer.py --batch_size 50 --z_dim 100 --num_gen_rnn 400 --num_gen_feature 300 --num_dis_layers 4 --num_dis_hidden 400 --learning_rate 0.001 --noise 0.1 --exp_name KDD-3 --max_epoch 5 --steps_per_epoch 10000 --data expdir/KDD/train.npz --gpu 0
 25%|###############3                                             |2524/10000[27:51<1:11:50, 1.73it/s][0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._get_inputs() interface will be deprecated after 30 Mar. Use inputs() instead!
[0327 14:23:39 @input_source.py:221] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
[0327 14:23:39 @develop.py:96] WRN [Deprecated] ModelDescBase._build_graph() interface will be deprecated after 30 Mar. Use build_graph() instead!
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC input: [50, 400]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC output: [50, 300]
[0327 14:23:39 @registry.py:121] gen/LSTM/00/FC2 input: [50, 300]
[0327 14:23:39 @registry.py:129] gen/LSTM/00/FC2 output: [50, 1]
WARNING:tensorflow:From src/TGAN_synthesizer.py:71: calling softmax (from tensorflow.python.ops.nn_ops) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead

Then it automatically went back to epoch 1:

discrim/dis_fc_top/W:0            [410, 1]          410
discrim/dis_fc_top/b:0            [1]                 1
Total #vars=92, #params=3934650, size=15.01MB
[0327 14:23:46 @base.py:187] Setup callbacks graph ...
[0327 14:23:46 @summary.py:38] Maintain moving average summary of 6 tensors in collection MOVING_SUMMARY_OPS.
[0327 14:23:46 @summary.py:75] Summarizing collection 'summaries' of size 9.
[0327 14:23:46 @graph.py:91] Applying collection UPDATE_OPS of 16 ops.
 25%|###############4                                             |2540/10000[27:59<1:08:34, 1.81it/s][0327 14:23:48 @base.py:205] Creating the session ...
harrytrinh2 commented 5 years ago

Here is the error:

2019-03-27 23:03:06.579435: W tensorflow/core/kernels/queue_base.cc:277] _0_QueueInput/input_queue: Skipping cancelled enqueue attempt with queue not closed
 60%|########################################################################################################4                                                                    |3018/5000[46:27<25:53, 1.28it/s]Traceback (most recent call last):
  File "src/TGAN_synthesizer.py", line 313, in <module>
    sample(args.sample, Model(), args.load, output_filename=args.output)
  File "src/TGAN_synthesizer.py", line 234, in sample
    session_init=get_model_loader(model_path),
  File "/home/harry/Documents/GANs-demo/TGAN-master/py36_env/lib/python3.6/site-packages/tensorpack/tfutils/sessinit.py", line 262, in get_model_loader
    return SaverRestore(filename)
  File "/home/harry/Documents/GANs-demo/TGAN-master/py36_env/lib/python3.6/site-packages/tensorpack/tfutils/sessinit.py", line 107, in __init__
    model_path = get_checkpoint_path(model_path)
  File "/home/harry/Documents/GANs-demo/TGAN-master/py36_env/lib/python3.6/site-packages/tensorpack/tfutils/varmanip.py", line 182, in get_checkpoint_path
    assert tf.gfile.Exists(model_path) or tf.gfile.Exists(model_path + '.index'), model_path
AssertionError: train_log/TGAN_synthesizer:KDD2-2/model-0

My config file:

[
    {
        "name": "KDD2",
        "num_random_search": 10,
        "train_csv": "data/KDD2.csv",
        "continuous_cols": [0, 2, 3, 4, 5],
        "epoch": 2,
        "steps_per_epoch": 5000,
        "output_epoch": 3,
        "sample_rows": 5000
    }
]

ManuelAlvarezC commented 5 years ago

Hi @TrinhDinhPhuc,

Regarding your first question:

Why did this problem occur? Please explain it to me.

There is no problem, and the model was not retrained. Let's see what happened:

  1. According to your first config JSON:
[
    {
        "name": "census",
        "num_random_search": 10,  # num_random_search: iterations of random hyper parameter search.
        "train_csv": "data/census-train.csv",
        "continuous_cols": [0, 2, 3, 4, 5],
        "epoch": 5,
        "steps_per_epoch": 10000,
        "output_epoch": 3,
        "sample_rows": 10000
    }
]

According to the num_random_search parameter, you are running 10 iterations of random search over the model hyperparameters: training and evaluating a different model instance with a different set of hyperparameters each time, in order to find the best ones for your dataset. And this message:

[0327 14:22:41 @base.py:264] Training has finished!

This simply means that one of the training cycles of the random search has finished. The output that follows, produced when the next iteration of the hyperparameter search loop starts, is what I think may have led you to believe that the model was being retrained.
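To make the control flow concrete, here is a minimal sketch of such a random-search loop (hypothetical parameter grid and function names, not TGAN's actual launcher code): each iteration draws fresh hyperparameters and trains a brand-new model, which is why the log shows the epoch counter restarting from 1 each time.

```python
import random

def sample_hyperparams(rng):
    # Each search iteration draws a fresh configuration at random
    # (hypothetical parameter grid, for illustration only).
    return {
        "num_gen_rnn": rng.choice([100, 200, 400]),
        "num_dis_layers": rng.choice([1, 2, 4]),
        "learning_rate": rng.choice([0.001, 0.0002]),
    }

def random_search(num_random_search, train_fn, seed=0):
    """Train one fresh model per iteration and keep the best score.

    Every iteration starts a brand-new training run; the previous
    model is finished, not retrained."""
    rng = random.Random(seed)
    results = []
    for _ in range(num_random_search):
        params = sample_hyperparams(rng)
        score = train_fn(params)  # trains a new model from epoch 1
        results.append((score, params))
    return max(results, key=lambda r: r[0])

# Toy "training" function standing in for a real TGAN run.
best_score, best_params = random_search(3, lambda p: p["num_gen_rnn"] / 400)
```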

Regarding your second question:

The issue here is that an experiment can't be run twice with the same name.
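One way to avoid the name clash is to derive a fresh experiment name before launching. This is an illustrative helper, not part of the TGAN API; the train_log/TGAN_synthesizer:NAME directory layout matches the paths shown in the logs above.

```python
import os

def unique_exp_name(base, root="train_log"):
    """Return an experiment name whose log directory does not exist
    yet, appending a numeric suffix if needed (illustrative helper,
    not part of the TGAN API)."""
    name, i = base, 1
    while os.path.exists(os.path.join(root, "TGAN_synthesizer:%s" % name)):
        name = "%s_%d" % (base, i)
        i += 1
    return name
```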

harrytrinh2 commented 5 years ago

Hi, based on your explanation, I modified my config file like this to check the code:

[
    {
        "name": "KDD2",
        "num_random_search": 1,
        "train_csv": "data/KDD2.csv",
        "continuous_cols": [0, 2, 3, 4, 5],
        "epoch": 1,
        "steps_per_epoch": 10,
        "output_epoch": 1,
        "sample_rows": 5
    }
]

However, the script got stuck like this and made no progress for a long time. I don't understand why it was restoring a checkpoint, or why it took so much time:

[0330 12:43:42 @registry.py:129] discrim/dis_fc_top output: [50, 1]
[0330 12:43:42 @collection.py:145] New collections created in tower : tf.GraphKeys.REGULARIZATION_LOSSES
[0330 12:43:42 @collection.py:164] These collections were modified but restored in : (tf.GraphKeys.SUMMARIES: 0->2)
[0330 12:43:42 @sessinit.py:90] WRN The following variables are in the checkpoint, but not found in the graph: global_step:0, optimize/beta1_power:0, optimize/beta2_power:0
2019-03-30 12:43:42.856889: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[0330 12:43:43 @sessinit.py:117] Restoring checkpoint from train_log/TGAN_synthesizer:KDD2-0/model-10 ...

I checked the code; it was frozen at this line in the launcher.py file: pool.map(worker, commands). Do you know why?

harrytrinh2 commented 5 years ago

Screenshot from 2019-03-30 18-57-27

As you can see here, it seems to be freezing at this line: sample(args.sample, Model(), args.load + ".index", output_filename=args.output). In the picture, it says Restoring checkpoint from train_log/TGAN_synthesizer:KDD2-0/model-10 ..., but in that folder there was no file named model-10. I think the correct path is model-10.index, not model-10, right?

ManuelAlvarezC commented 5 years ago

Hi @TrinhDinhPhuc,

Regarding your first question:

I checked the code; it was frozen at this line in the launcher.py file. Do you know why?

Seeing your configuration file and the code, there is something that may cause trouble:

[
    {
        ...
        "sample_rows": 5
    }
]

We just found out that there is a bug which prevents TGAN from working properly when sampling a number of rows that is not an exact multiple of the batch_size.

To work around this problem, and since the only possible batch sizes that TGAN can use right now are 50, 100 and 200, please make sure to always request a number of rows to sample that is an exact multiple of 200.
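Until the bug is fixed, a workaround is to round the requested row count up to an exact multiple of the batch size, sample that many rows, and trim the extras afterwards. A minimal sketch (this helper is not part of TGAN):

```python
def round_up_to_batch(sample_rows, batch_size=200):
    """Round a requested row count up to the nearest exact multiple
    of batch_size; sample that many rows, then keep only the first
    sample_rows of them (workaround sketch, not part of TGAN)."""
    if sample_rows <= 0:
        raise ValueError("sample_rows must be positive")
    return ((sample_rows + batch_size - 1) // batch_size) * batch_size
```

For example, a config with "sample_rows": 5 would instead request 200 rows and keep only the first 5 after sampling.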

For the second question:

As you can see here, it seems to be freezing at this line: sample(args.sample, Model(), args.load + ".index", output_filename=args.output).

Well, I can't tell from your screenshots, but this is one of the potential consequences of the bug explained above.

I think the correct path is model-10.index not model-10 right?

Yes, indeed.
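For context, a TensorFlow checkpoint path like model-10 is a prefix rather than a single file: the saver writes model-10.index and model-10.data-* to disk, which is why no file named exactly model-10 appears in the folder. A quick sketch of the existence check, mirroring the assertion in tensorpack's get_checkpoint_path but using plain os.path instead of tf.gfile:

```python
import os

def checkpoint_exists(prefix):
    """A checkpoint 'path' such as train_log/.../model-10 is a prefix,
    not a file; the saver actually writes prefix + '.index' and
    prefix + '.data-*'. Accept either the prefix itself or its
    .index file, as tensorpack's get_checkpoint_path does."""
    return os.path.exists(prefix) or os.path.exists(prefix + ".index")
```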

Also, I'm not sure if you're aware, but any synthesized data will be stored in TGAN-master/exp_dir/KDD2/.