taki0112 / UGATIT

Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)
MIT License
6.17k stars 1.04k forks

training on 128px images #84

Open DavidPaw opened 4 years ago

DavidPaw commented 4 years ago

I am trying to train the model on another dataset. Due to my limited GPU memory, I set the image size to 128px and kept all the other params at their defaults (with --light = False). After more than 100 epochs, I found the g_loss went down slowly while the d_loss barely changed. Here is part of my training log:

```
Epoch: [ 23] [ 999/10000] time: 16906.9957 d_loss: 3.08590425, g_loss: 73.51989753
Epoch: [ 23] [1999/10000] time: 17201.6740 d_loss: 3.07735086, g_loss: 76.30607847
Epoch: [ 23] [2999/10000] time: 17496.4938 d_loss: 3.12576462, g_loss: 51.78772616
Epoch: [ 23] [3999/10000] time: 17791.2827 d_loss: 3.07284615, g_loss: 75.50018718
Epoch: [ 23] [4999/10000] time: 18086.2312 d_loss: 3.08184701, g_loss: 65.71625671
Epoch: [ 23] [5999/10000] time: 18389.5246 d_loss: 3.11587416, g_loss: 47.44926098
Epoch: [ 23] [6999/10000] time: 18684.1930 d_loss: 3.04495883, g_loss: 59.90846872
Epoch: [ 23] [7999/10000] time: 18978.9267 d_loss: 3.13069185, g_loss: 70.67539809
Epoch: [ 23] [8999/10000] time: 19273.7113 d_loss: 3.07864166, g_loss: 61.32302952
Epoch: [ 23] [9999/10000] time: 19568.2578 d_loss: 3.09075990, g_loss: 91.65463501
Epoch: [ 24] [ 999/10000] time: 19915.8543 d_loss: 3.06492620, g_loss: 45.62110547
Epoch: [ 24] [1999/10000] time: 20210.6648 d_loss: 3.07718702, g_loss: 99.52553242
Epoch: [ 24] [2999/10000] time: 20505.5909 d_loss: 3.06712106, g_loss: 49.65786957
Epoch: [ 24] [3999/10000] time: 20800.2968 d_loss: 3.07929607, g_loss: 63.85243637
Epoch: [ 24] [4999/10000] time: 21094.9015 d_loss: 3.09633374, g_loss: 68.58105292
Epoch: [ 24] [5999/10000] time: 21396.3598 d_loss: 3.10153692, g_loss: 61.12775206
Epoch: [ 24] [6999/10000] time: 21691.1883 d_loss: 3.09252290, g_loss: 55.74537253
Epoch: [ 24] [7999/10000] time: 21985.6966 d_loss: 3.07935773, g_loss: 42.89120544
Epoch: [ 24] [8999/10000] time: 22280.4273 d_loss: 3.02845049, g_loss: 101.97019059
Epoch: [ 24] [9999/10000] time: 22575.3636 d_loss: 3.07671818, g_loss: 89.45536136
Epoch: [ 25] [ 999/10000] time: 22923.1625 d_loss: 3.05323941, g_loss: 69.47784813
Epoch: [ 25] [1999/10000] time: 23218.2782 d_loss: 3.08390261, g_loss: 60.91669004
Epoch: [ 25] [2999/10000] time: 23512.6439 d_loss: 3.06433159, g_loss: 48.60272773
Epoch: [ 25] [3999/10000] time: 23807.3268 d_loss: 3.10401783, g_loss: 89.96760877
Epoch: [ 25] [4999/10000] time: 24102.2829 d_loss: 3.06777660, g_loss: 74.95750990
Epoch: [ 25] [5999/10000] time: 24406.8876 d_loss: 3.01425153, g_loss: 51.37549698
Epoch: [ 25] [6999/10000] time: 24701.6992 d_loss: 3.06115526, g_loss: 55.64374349
Epoch: [ 25] [7999/10000] time: 24996.6479 d_loss: 3.04264698, g_loss: 107.05923116
Epoch: [ 25] [8999/10000] time: 25291.2565 d_loss: 3.05638903, g_loss: 45.66431950
Epoch: [ 25] [9999/10000] time: 25586.2944 d_loss: 3.05310924, g_loss: 57.12808654
... ...
Epoch: [108] [ 999/10000] time: 274900.4027 d_loss: 2.75732487, g_loss: 15.71750626
Epoch: [108] [1999/10000] time: 275201.7551 d_loss: 2.77189661, g_loss: 18.03043892
Epoch: [108] [2999/10000] time: 275502.8889 d_loss: 2.74505283, g_loss: 29.73778408
Epoch: [108] [3999/10000] time: 275804.0007 d_loss: 2.74047477, g_loss: 13.46787546
Epoch: [108] [4999/10000] time: 276105.3889 d_loss: 2.75409283, g_loss: 13.94690639
Epoch: [108] [5999/10000] time: 276413.3031 d_loss: 2.73639352, g_loss: 15.85196716
Epoch: [108] [6999/10000] time: 276714.8494 d_loss: 2.74303201, g_loss: 13.01651404
Epoch: [108] [7999/10000] time: 277016.3802 d_loss: 2.70403919, g_loss: 19.72308408
Epoch: [108] [8999/10000] time: 277317.7888 d_loss: 2.75544275, g_loss: 20.05573119
Epoch: [108] [9999/10000] time: 277618.7255 d_loss: 2.74763519, g_loss: 27.60796359
Epoch: [109] [ 999/10000] time: 277973.4322 d_loss: 2.74708301, g_loss: 15.04032384
Epoch: [109] [1999/10000] time: 278274.3401 d_loss: 2.76856373, g_loss: 13.25339388
Epoch: [109] [2999/10000] time: 278575.9948 d_loss: 2.77913607, g_loss: 20.37707386
Epoch: [109] [3999/10000] time: 278876.7019 d_loss: 2.73852636, g_loss: 26.90761091
Epoch: [109] [4999/10000] time: 279177.7898 d_loss: 2.71317122, g_loss: 14.07005649
Epoch: [109] [5999/10000] time: 279488.2022 d_loss: 2.74429889, g_loss: 16.16309347
Epoch: [109] [6999/10000] time: 279789.5086 d_loss: 2.72074990, g_loss: 49.40637661
Epoch: [109] [7999/10000] time: 280091.3858 d_loss: 2.71115365, g_loss: 14.88366779
Epoch: [109] [8999/10000] time: 280392.8122 d_loss: 2.80026801, g_loss: 15.08182055
Epoch: [109] [9999/10000] time: 280693.8832 d_loss: 2.76265522, g_loss: 15.97350811
Epoch: [110] [ 999/10000] time: 281052.0501 d_loss: 2.72147970, g_loss: 13.03669257
Epoch: [110] [1999/10000] time: 281353.3118 d_loss: 2.77001529, g_loss: 14.07980553
Epoch: [110] [2999/10000] time: 281654.4276 d_loss: 2.73421922, g_loss: 15.32935848
Epoch: [110] [3999/10000] time: 281956.0588 d_loss: 2.76495708, g_loss: 13.15643111
Epoch: [110] [4999/10000] time: 282257.0708 d_loss: 2.78270527, g_loss: 15.62895326
Epoch: [110] [5999/10000] time: 282566.4323 d_loss: 2.74962019, g_loss: 13.11755330
Epoch: [110] [6999/10000] time: 282867.1354 d_loss: 2.76405218, g_loss: 15.35785124
Epoch: [110] [7999/10000] time: 283168.7494 d_loss: 2.74026593, g_loss: 20.30538943
Epoch: [110] [8999/10000] time: 283470.3044 d_loss: 2.74255289, g_loss: 16.45972105
Epoch: [110] [9999/10000] time: 283771.6103 d_loss: 2.71626760, g_loss: 18.04276052
Epoch: [111] [ 999/10000] time: 284126.6149 d_loss: 2.71018639, g_loss: 14.03714235
Epoch: [111] [1999/10000] time: 284428.2454 d_loss: 2.70579766, g_loss: 25.90458973
Epoch: [111] [2999/10000] time: 284729.5248 d_loss: 2.73066261, g_loss: 28.13376340
Epoch: [111] [3999/10000] time: 285030.7362 d_loss: 2.74041314, g_loss: 31.78455309
Epoch: [111] [4999/10000] time: 285332.2747 d_loss: 2.70574017, g_loss: 17.19166613
Epoch: [111] [5999/10000] time: 285643.0835 d_loss: 2.72995417, g_loss: 13.59579601
```

I wonder if there is something wrong with the discriminator. Can anyone give me some advice? Thanks a lot.
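For anyone wanting to inspect a log like the one above more easily, here is a small, hypothetical helper to parse it into numbers (the regex assumes exactly the `Epoch: [ N] [ i/M] time: T d_loss: D, g_loss: G` layout shown; the function name is my own):

```python
import re

# Matches one log entry of the form shown in the training log above.
LINE_RE = re.compile(
    r"Epoch:\s*\[\s*(\d+)\]\s*\[\s*(\d+)/\d+\]\s*time:\s*[\d.]+\s*"
    r"d_loss:\s*([\d.]+),\s*g_loss:\s*([\d.]+)"
)

def parse_log(text):
    """Return a list of (epoch, step, d_loss, g_loss) tuples."""
    return [
        (int(e), int(s), float(d), float(g))
        for e, s, d, g in LINE_RE.findall(text)
    ]

sample = ("Epoch: [ 23] [ 999/10000] time: 16906.9957 "
          "d_loss: 3.08590425, g_loss: 73.51989753")
print(parse_log(sample))
```

Plotting the parsed d_loss column makes it easy to see whether it is truly flat or just moving much more slowly than g_loss.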

DavidPaw commented 4 years ago

I found something doubtful in discriminator_global(): maybe the number of downsampling layers is too large for a smaller input image. I have reduced n_dis; hopefully it will work.
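A quick back-of-the-envelope check shows why the downsampling count matters at 128px. This is only a sketch, not the actual discriminator_global() code (the exact number of stride-2 layers it uses depends on n_dis, so check UGATIT.py for your version); it just assumes each downsampling conv halves the spatial size:

```python
# Sketch: spatial size of the discriminator's output feature map after
# n stride-2 downsamplings, assuming each one halves the resolution.
def feature_map_size(input_size, n_down):
    size = input_size
    for _ in range(n_down):
        size = max(1, size // 2)
    return size

for n_down in range(3, 8):
    print(f"{n_down} downsamplings: 128px -> {feature_map_size(128, n_down)}px")
```

Once the feature map collapses to 1x1, a patch-based discriminator degenerates into a single global real/fake decision, so a 128px input tolerates noticeably fewer downsampling layers than the 256px default before that happens.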

tankfly2014 commented 4 years ago

What was the result?

gdwei commented 3 years ago

Similar problem here. The generator loss goes down from about a thousand to around 50, but the discriminator loss keeps jittering around the same values. Does anyone have an idea how to solve this, or what the reason for such a result might be?