Open: kerryhhh opened this issue 2 years ago
Hi, thanks for your interest. The hyperparameter choice depends on the tradeoff between concealing quality and recovering quality. In the first stage, all hyperparameters can be set to 1 until the network converges. Then you can finetune the network with different lambda values according to the performance you need. For example, if you prefer higher reconstruction quality, set lamda_reconstruction higher.
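For reference, here is a minimal sketch of the two-stage weighting strategy described above, assuming the loss is a weighted sum of per-term losses. The identifiers lamda_reconstruction and lamda_low_frequency come from this thread; lamda_guide and the 5.0 finetuning value are placeholders, not values taken from the repository or the paper.

```python
import torch

# Stage 1: start with every weight at 1 until the network converges.
# Names follow the config-style identifiers mentioned in this thread;
# lamda_guide is an assumed third term, adjust it to your own config.
weights = {
    "lamda_reconstruction": 1.0,
    "lamda_guide": 1.0,
    "lamda_low_frequency": 1.0,
}

def total_loss(rec_loss, guide_loss, low_freq_loss, w):
    """Weighted sum of the individual loss terms (illustrative only)."""
    return (w["lamda_reconstruction"] * rec_loss
            + w["lamda_guide"] * guide_loss
            + w["lamda_low_frequency"] * low_freq_loss)

# Stage 2 (finetuning): raise the weight of the objective you care about,
# e.g. a higher lamda_reconstruction for better recovered-image quality.
# The value 5.0 is a placeholder, not a recommended setting.
finetune_weights = {**weights, "lamda_reconstruction": 5.0}

if __name__ == "__main__":
    # Dummy scalar losses just to show how the weighting is applied.
    rec, guide, low = torch.tensor(0.2), torch.tensor(0.1), torch.tensor(0.05)
    print(total_loss(rec, guide, low, weights))
    print(total_loss(rec, guide, low, finetune_weights))
```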
Thank you for sharing your code. I am trying it and I do run into the loss explosion problem. Do you know the underlying reason for it? Is there a better solution than manually restarting training with a lower learning rate each time?
Thank you for sharing your code! I noticed that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from those in the paper. Which ones should I use?