Closed babaozhouy5 closed 6 years ago
Yes, I have also found that good values for tv_weight
are around 1e-6 or 1e-7, almost at the point where you lose accuracy to numerical instabilities. One workaround is to multiply the loss terms other than the TV loss by a factor of, say, 100 while dividing the learning rate by 100 at the same time.
Note that you do not apply a square root in your code; to get the vanilla TV loss you should. Your TV loss will behave differently from the one with the square root. See Figure 2 of [1] for an example.
[1] Understanding Deep Image Representations by Inverting Them, Aravindh Mahendran, Andrea Vedaldi
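For reference, the two variants mentioned above can be sketched for a single-channel image as follows (hypothetical helper names; the first matches the squared form discussed in the thread, the second applies the square root per pixel as in [1]):

```python
import numpy as np

def tv_loss_squared(img):
    """Squared TV penalty: sum of squared neighbor differences (no square root)."""
    dh = img[1:, :] - img[:-1, :]   # vertical neighbor differences
    dw = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
    return np.sum(dh ** 2) + np.sum(dw ** 2)

def tv_loss_vanilla(img, eps=1e-8):
    """Vanilla (isotropic) TV: per-pixel square root of the summed squared gradients."""
    dh = img[1:, :-1] - img[:-1, :-1]
    dw = img[:-1, 1:] - img[:-1, :-1]
    # eps keeps the gradient finite where both differences are exactly zero
    return np.sum(np.sqrt(dh ** 2 + dw ** 2 + eps))
```

Note the different scaling behavior: doubling the image intensities quadruples the squared version but only doubles the vanilla one, which is part of why the two need very different tv_weight values.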
Thanks for the reply :). Yes, I will try this method. The paper looks very good 👍, I should read it carefully.
In the Super-resolution section there is a tv_weight & tv_loss, but when I change tv_weight to a small value (the default is 0.0), such as 0.1/0.01/0.001, I get a -inf PSNR value for the LR & HR images. It seems that only tv_weight < 1e-3, e.g. tv_weight = 1e-4, gives a proper result.
So I changed the tv_loss function like this:
(Thanks for TV Loss.) It seems to work well and is not sensitive to the weight. Any suggestions?
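The modified snippet did not come through in the thread. One common change that makes the penalty much less sensitive to tv_weight (an assumption about the kind of modification, not the poster's exact code) is to average the squared differences instead of summing them, so the loss magnitude no longer grows with image resolution:

```python
import numpy as np

def tv_loss_normalized(img):
    """Mean (rather than summed) squared neighbor differences.
    Averaging keeps the loss magnitude roughly independent of image size,
    so one tv_weight works across resolutions."""
    dh = img[1:, :] - img[:-1, :]   # vertical neighbor differences
    dw = img[:, 1:] - img[:, :-1]   # horizontal neighbor differences
    return np.mean(dh ** 2) + np.mean(dw ** 2)
```

With a summed loss, a 2x larger image roughly quadruples the TV term; the averaged version stays on the same scale, which matches the reported "not sensitive" behavior.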