jsyoon0823 / GAIN

Codebase for Generative Adversarial Imputation Networks (GAIN) - ICML 2018

The test loss didn't converge while running GAIN_Letter.py #8

Closed MichaelWei7 closed 4 years ago

MichaelWei7 commented 5 years ago

Hi! I read your paper this week. I ran GAIN_Letter.py, recorded the train and test loss, and plotted all the values, but the test loss didn't converge. [plot of train/test loss over iterations] I didn't edit any code except the number of iterations, and there was just one warning message. Why didn't the test loss converge?

Here are the outputs (tqdm progress lines omitted):

```
WARNING:tensorflow:From C:\Users\Michael\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-05-11 23:51:08.686711: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Iter: 0    Train_loss: 0.2544  Test_loss: 0.2435
Iter: 100  Train_loss: 0.1807  Test_loss: 0.183
Iter: 200  Train_loss: 0.1521  Test_loss: 0.1615
Iter: 300  Train_loss: 0.1367  Test_loss: 0.1388
Iter: 400  Train_loss: 0.1231  Test_loss: 0.1307
...
```

jsyoon0823 commented 4 years ago

I think there is a problem with the GAN training. Please adjust the hyper-parameters to make it converge; a larger alpha may help it converge better. GAN training itself is difficult.
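For context, the alpha mentioned here is the weight on the generator's reconstruction loss over the observed entries, traded off against its adversarial loss on the missing ones. Below is a minimal NumPy sketch of that two-term objective, assuming the loss structure from the paper; the variable names are illustrative, not the ones used in GAIN_Letter.py:

```python
import numpy as np

def generator_loss(d_prob, g_sample, x, mask, alpha=10.0):
    """Two-term GAIN-style generator objective (illustrative sketch).

    d_prob   : discriminator outputs D(x_hat), values in (0, 1)
    g_sample : generator's imputed data matrix
    x        : original data (only observed entries are meaningful)
    mask     : 1 where an entry is observed, 0 where it is missing
    alpha    : weight on the reconstruction term -- the knob to raise
    """
    eps = 1e-8
    # Adversarial term: make D believe the imputed (missing) entries are real.
    adversarial = -np.mean((1.0 - mask) * np.log(d_prob + eps))
    # Reconstruction term: reproduce the observed entries faithfully.
    reconstruction = np.mean((mask * x - mask * g_sample) ** 2) / np.mean(mask)
    return adversarial + alpha * reconstruction
```

Raising alpha weights fidelity to the observed entries more heavily, which tends to stabilize training, while the adversarial term still drives the imputation of the missing entries.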