Nice work!
I have a question about the model size. The trained model you provide is fairly small (around 14 MB), but the checkpoint produced by running your train.py is far larger (around 126 MB). Did you apply any post-processing to the model, or is there a parameter in your code that controls the model size? (Is there any trade-off for such a compression?)
[Released model]
69 May 22 16:15 checkpoint*
14M May 22 16:15 snap-0.data-00000-of-00001
3.6K May 22 16:15 snap-0.index
14M May 22 16:15 snap-0.meta*
[output of train.py]
77 May 23 15:50 checkpoint
84M May 23 20:01 events.out.tfevents.1527040937.dgx1-server2
126M May 23 15:50 snap-10000.data-00000-of-00001
12K May 23 15:50 snap-10000.index
14M May 23 15:50 snap-10000.meta
The released places2 model contains only the generator (the celeba-hq release has both generator and discriminator). Your checkpoint is large mainly because it also includes the discriminator weights.
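For anyone who wants to shrink their own checkpoint the same way, a minimal sketch (assuming TF 1.x, and assuming the generator variables live under a scope such as `inpaint_net` — check the variable scopes in your own graph first) is to re-save the checkpoint with a `tf.train.Saver` restricted to the generator's variables:

```python
# Sketch: strip discriminator weights from a training checkpoint by
# re-saving only the generator's variables. The scope name
# 'inpaint_net' is an assumption; inspect your graph's scopes first.
import tensorflow as tf

GEN_SCOPE = 'inpaint_net'  # hypothetical generator scope name

def is_generator_var(name, scope=GEN_SCOPE):
    # Keep only variables whose name starts with the generator scope.
    return name.startswith(scope + '/')

def export_generator_only(full_ckpt, slim_ckpt):
    # Assumes the full graph (generator + discriminator) is already built.
    gen_vars = [v for v in tf.global_variables()
                if is_generator_var(v.name)]
    slim_saver = tf.train.Saver(var_list=gen_vars)
    with tf.Session() as sess:
        tf.train.Saver().restore(sess, full_ckpt)  # load all weights
        slim_saver.save(sess, slim_ckpt)           # save generator only
```

The resulting `.data` file should drop to roughly the size of the released model, since the discriminator (and optimizer state, if it is excluded from `var_list`) is no longer serialized.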