-
-
How do you benchmark a pre-trained model with a custom number of residual blocks and feature maps? If I train a model with, say, 16 residual blocks and 16 feature maps, how would I then run it on the be…
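A minimal sketch of how this is usually done, assuming the EDSR-PyTorch `main.py` entry point and its standard flags (`--n_resblocks`, `--n_feats`, `--pre_train`, `--test_only`); the checkpoint path is a placeholder:

```shell
# Benchmark a pretrained model built with 16 residual blocks and 16 feature maps.
# The architecture flags must match the values used at training time,
# otherwise the state dict will not load.
python main.py \
    --model EDSR \
    --scale 4 \
    --n_resblocks 16 \
    --n_feats 16 \
    --pre_train ../experiment/my_model/model/model_best.pt \
    --data_test Set5+Set14 \
    --test_only
```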
-
Hi, sorry to bother you!
If I apply a transform to the input data, how can I change the number of input channels from 3 to 12?
And I need to apply the inverse transform, so the 12-channel output c…
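A minimal sketch of the usual approach, using a toy 3-channel model in place of the real network (in EDSR-PyTorch this can often be done at build time via the `--n_colors` flag, which sets both input and output channels; the manual surgery below is the equivalent for an already-built model):

```python
import torch
import torch.nn as nn

# Toy stand-in for a 3-channel super-resolution model:
# head conv (3 -> 64), activation, tail conv (64 -> 3).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Replace the head and tail convolutions with 12-channel versions,
# so the model both accepts and emits 12 channels.
model[0] = nn.Conv2d(12, 64, 3, padding=1)
model[-1] = nn.Conv2d(64, 12, 3, padding=1)

x = torch.randn(1, 12, 32, 32)
y = model(x)
print(y.shape)  # torch.Size([1, 12, 32, 32])
```

Note that the replaced convolutions are freshly initialized, so some fine-tuning is needed before the pretrained weights in the middle of the network are useful again.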
-
How can I see the activation functions used in the trained model?
i.e. If I load the model using:
`model = torch.load('../models/edsr_baseline_x4-6b446fab.pt', map_location='cpu')`
then I can see t…
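One way to answer this, sketched with a toy model standing in for the real checkpoint: iterate over the loaded model's submodules and collect the activation layers.

```python
import torch.nn as nn

# Toy model in place of the loaded EDSR checkpoint.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),
    nn.PReLU(),
)

# model.modules() walks every submodule recursively;
# filter for common activation types and record their class names.
activations = [
    type(m).__name__
    for m in model.modules()
    if isinstance(m, (nn.ReLU, nn.PReLU, nn.LeakyReLU, nn.Sigmoid, nn.Tanh))
]
print(activations)  # ['ReLU', 'PReLU']
```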
-
Hi Jiahui, I want to reproduce your WDSR-A x2 result of 34.541 dB. I used the EDSR-PyTorch framework to train, and set --patch_size 96 --n_resblocks 8 --n_feats 32 --block_feats 128 --res_scale 1.
All re…
-
Hi, when I trained with the CharbonnierLoss, the loss was very large, but when I trained with the L1 loss it was normal. What causes this? Could you give me some advice?
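For reference, a minimal pure-Python sketch (no framework assumed) of the two losses being compared: Charbonnier is a smooth approximation of L1, `sqrt(d^2 + eps^2)`, so with a sensible `eps` it should be close to L1. A `sum` reduction instead of `mean` is one common reason for very large values, since the loss then scales with the number of pixels.

```python
import math

def l1_loss(pred, target):
    # Mean absolute error.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def charbonnier_loss(pred, target, eps=1e-3, reduction="mean"):
    # Smooth L1 variant: sqrt(diff^2 + eps^2) per element.
    terms = [math.sqrt((p - t) ** 2 + eps ** 2) for p, t in zip(pred, target)]
    total = sum(terms)
    return total / len(terms) if reduction == "mean" else total

pred = [0.1, 0.5, 0.9]
target = [0.0, 0.4, 1.0]

print(l1_loss(pred, target))               # ~0.1
print(charbonnier_loss(pred, target))      # ~0.1, close to L1
print(charbonnier_loss(pred, target, reduction="sum"))  # scales with element count
```

Checking the reduction mode and the magnitude of `eps` relative to the pixel range (e.g. [0, 1] vs [0, 255]) in the CharbonnierLoss implementation would be the first things to try.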
-
Hi,
I tried the latest version of EDSR-PyTorch with an argument named '--chop_forward', but it complains that no such argument exists. Is it still there, or has it been renamed?
Thanks!
-
```
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
Tracebac…
```
-
Hi,
From the paper, I noticed the sentence "We also observe that fine-tuning on a network pretrained on the BI degradation model leads to higher PSNR values than training from scratch." Could you tell…
-
I appreciate your excellent work. I'm training the WDSR model on the DIV2K_x3 training set; the global step is now about 2650000: PSNR = 31.0282, global_step = 2650000, loss = 0.01833889. It is increasing …