aiff22 / DPED

Software and pre-trained models for automatic photo quality enhancement using Deep Convolutional Networks

Hyper parameters of pre-trained model #17

Closed ved27 closed 5 years ago

ved27 commented 5 years ago

Hi,

I would like to know the hyper-parameter settings used for the provided pre-trained model. I trained the network with the default parameters, but the output quality does not match the output generated with the pre-trained model.

I've tried the following loss-function coefficients and trained the network: content: 10.0, tv: 2000, texture: 1.0, color: 0.5, as well as content: 1.0, tv: 400, texture: 0.4, color: 0.1. However, both settings produce output that is degraded compared to that of the pre-trained model (see the sketch below for how I am combining the terms).
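For reference, this is a minimal sketch of how I assume the weighted terms are combined into the total generator loss; the variable names (`w_content`, `loss_content`, etc.) and the example loss values are mine, not necessarily those used in this repository:

```python
# Sketch only: coefficient/variable names are illustrative placeholders,
# not the exact ones used in the DPED training script.

# First setting I tried
w_content, w_texture, w_color, w_tv = 10.0, 1.0, 0.5, 2000.0

# Example per-term loss values (placeholders, just to make this runnable)
loss_content, loss_texture, loss_color, loss_tv = 1.2, 0.7, 0.05, 1e-4

# Total generator loss as a weighted sum of the individual terms
loss_generator = (w_content * loss_content
                  + w_texture * loss_texture
                  + w_color * loss_color
                  + w_tv * loss_tv)

print(loss_generator)
```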

Moreover, the paper suggests pre-training the discriminator, but the provided code does not use a pre-trained discriminator.

Please help with the above two queries.

Awaiting your help,

aiff22 commented 5 years ago

@ved27, the code in this repository contains the same hyper-parameters that were used to train the provided pre-trained models. Note that the resulting model is always slightly different due to the texture loss: since the discriminator is trained from scratch, each run produces a different discriminator, and thus the texture loss used to train the generator also differs from run to run.

Regarding discriminator pre-training: you can remove the texture (discriminator) loss from the generator's total loss for the first 1K-2K iterations, while still training both networks simultaneously.
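A minimal sketch of what this could look like (the names `WARMUP_ITERS`, `generator_loss`, and the coefficient values are illustrative, not the exact variables used in this repository): the texture term is simply excluded from the generator's objective during the first iterations, while the discriminator keeps training on every iteration.

```python
# Sketch only: all names and values below are placeholders, not the
# exact ones used in the DPED training script.

WARMUP_ITERS = 2000  # first 1K-2K iterations: generator ignores the texture loss

def generator_loss(iteration, loss_content, loss_texture, loss_color, loss_tv,
                   w_content=10.0, w_texture=1.0, w_color=0.5, w_tv=2000.0):
    # The discriminator is still trained on every iteration; we only gate
    # the texture (adversarial) term inside the generator's objective.
    texture_weight = 0.0 if iteration < WARMUP_ITERS else w_texture
    return (w_content * loss_content
            + texture_weight * loss_texture
            + w_color * loss_color
            + w_tv * loss_tv)

# Example: warm-up loss (texture term excluded) vs. full loss
print(generator_loss(500,  1.2, 0.7, 0.05, 1e-4))
print(generator_loss(5000, 1.2, 0.7, 0.05, 1e-4))
```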