Thank you for your awesome code!
I am hoping you might open-source the log files you have from training. Maybe the training and validation loss as a function of epoch (and/or batch) with an estimate of the runtime?
Making model...
Loading model from experiment/ridnet.pt
Load Model from epoch: 27
Epoch 28:g_trn_l=4260.0166,d_trn_l=-8.6347: 100%|█| 2500/2500 [18:04<00:00, 2.3
val_d_loss=-16553.590091705322, val_g_loss=1456043.0610351562: 100%|█| 250/250 [
Epoch 29: val_d_loss=-8.276795045852662, val_g_loss=728.0215305175781
Epoch 29:g_trn_l=612.9119,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:27<00:00, 2.39
val_d_loss=-16508.084072113037, val_g_loss=1441245.7890625: 100%|█| 250/250 [00:
Epoch 30: val_d_loss=-8.254042036056518, val_g_loss=720.62289453125
Epoch 30:g_trn_l=569.982,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:29<00:00, 2.38i
val_d_loss=-16534.407222747803, val_g_loss=1445840.658203125: 100%|█| 250/250 [0
Epoch 31: val_d_loss=-8.267203611373901, val_g_loss=722.9203291015625
Epoch 31:g_trn_l=642.1012,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:27<00:00, 2.39
val_d_loss=-16530.938148498535, val_g_loss=1451748.1240234375: 100%|█| 250/250 [
Epoch 32: val_d_loss=-8.265469074249268, val_g_loss=725.8740620117187
Epoch 32:g_trn_l=1922.5544,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:29<00:00, 2.3
val_d_loss=-16461.000728607178, val_g_loss=1447958.69140625: 100%|█| 250/250 [00
Epoch 33: val_d_loss=-8.23050036430359, val_g_loss=723.979345703125
Epoch 33:g_trn_l=596.3593,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:28<00:00, 2.38
val_d_loss=-16396.410007476807, val_g_loss=1444418.5522460938: 100%|█| 250/250 [
Epoch 34: val_d_loss=-8.198205003738403, val_g_loss=722.2092761230468
Epoch 34:g_trn_l=4094.3552,d_trn_l=-8.6347: 100%|█| 2500/2500 [17:33<00:00, 2.3
val_d_loss=-16325.490299224854, val_g_loss=1430286.7797851562: 100%|█| 250/250 [
Epoch 35: val_d_loss=-8.162745149612427, val_g_loss=715.1433898925782
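In case it helps, here is a minimal sketch for pulling the per-epoch losses and approximate runtimes out of a log like the one pasted above. It assumes exactly the line formats shown there; the `train.log` filename and the regex patterns are my own guesses, not something from the repo.

```python
import re

# Assumed line formats (copied from the pasted log, not from the repo):
#   "Epoch 28:g_trn_l=4260.0166,d_trn_l=-8.6347: 100%|█| 2500/2500 [18:04<00:00, ..."
#   "Epoch 29: val_d_loss=-8.2767..., val_g_loss=728.0215..."
TRAIN_RE = re.compile(
    r"Epoch (\d+):g_trn_l=([-\d.]+),d_trn_l=([-\d.]+).*?\[(\d+):(\d+)<")
VAL_RE = re.compile(
    r"Epoch (\d+): val_d_loss=([-\d.]+), val_g_loss=([-\d.]+)")


def parse_log(path):
    """Return {epoch: stats} with train losses, per-epoch runtime, and val losses."""
    epochs = {}
    with open(path) as f:
        for line in f:
            m = TRAIN_RE.search(line)
            if m:
                ep = int(m.group(1))
                epochs.setdefault(ep, {}).update(
                    g_trn_l=float(m.group(2)),
                    d_trn_l=float(m.group(3)),
                    # elapsed time reported by the tqdm bar, in seconds
                    train_seconds=int(m.group(4)) * 60 + int(m.group(5)),
                )
            m = VAL_RE.search(line)
            if m:
                ep = int(m.group(1))
                epochs.setdefault(ep, {}).update(
                    val_d_loss=float(m.group(2)),
                    val_g_loss=float(m.group(3)),
                )
    return epochs


if __name__ == "__main__":
    # "train.log" is a placeholder path for a saved copy of the console output.
    for ep, stats in sorted(parse_log("train.log").items()):
        print(ep, stats)
```

From the log above, each training epoch takes roughly 17-18 minutes for 2500 iterations, plus a short validation pass of 250 iterations.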