xinntao / EDVR

Winning Solution in NTIRE19 Challenges on Video Restoration and Enhancement (CVPR19 Workshops) - Video Restoration with Enhanced Deformable Convolutional Networks. EDVR has been merged into BasicSR and this repo is a mirror of BasicSR.
https://github.com/xinntao/BasicSR

Offset mean is larger than 100 in PCD Align Module's DCNv2. Could you give me some advice to minimize it? #16

Open huihuiustc opened 5 years ago
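
For reference, a minimal sketch of how this offset statistic can be monitored during training, using a plain conv as a stand-in for the DCN's offset-predicting layer (the layer name and wiring here are assumptions; the real layer sits inside EDVR's PCD align module and depends on the DCN implementation):

```python
import torch
import torch.nn as nn

def log_offset_mean(module, inputs, output):
    # Mean absolute value of the predicted offsets; values far beyond the
    # kernel size (e.g. > 100 pixels) suggest the alignment has diverged.
    print(f"offset mean: {output.abs().mean().item():.2f}")

# Stand-in for the offset-predicting conv inside a DCNv2 layer; the exact
# attribute path in EDVR depends on the DCN op used, so this is illustrative.
conv_offset = nn.Conv2d(64, 2 * 3 * 3, 3, padding=1)  # 2 coords per kernel tap
handle = conv_offset.register_forward_hook(log_offset_mean)

feat = torch.randn(1, 64, 32, 32)
_ = conv_offset(feat)  # the hook prints the offset statistic on each forward
handle.remove()
```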

DLwbm123 commented 5 years ago

Same for me.

xinntao commented 5 years ago

Yes, indeed we also found that training with DCN is unstable. We will write down the issues we met during the competition in this repo later; unstable training is one of them. There are still a lot of things that can be improved in EDVR, and we are exploring some of them.

During the competition, we trained the large model from smaller ones and used a smaller learning rate for the DCN. Even with these tricks, we occasionally ran into over-large offsets, and when that happened we simply resumed training from a normal checkpoint.
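
A minimal sketch of the smaller-learning-rate trick using PyTorch parameter groups; the module names and rates are illustrative stand-ins, not the competition settings:

```python
import torch
import torch.nn as nn

# Toy stand-ins: in EDVR the DCN layers live inside the PCD align module.
pcd_align = nn.Conv2d(64, 64, 3, padding=1)       # pretend this holds the DCN
reconstruction = nn.Conv2d(64, 64, 3, padding=1)  # the rest of the network

# Give the DCN parameters a smaller learning rate than everything else.
optimizer = torch.optim.Adam([
    {"params": reconstruction.parameters(), "lr": 4e-4},
    {"params": pcd_align.parameters(), "lr": 4e-5},  # e.g. 10x smaller for DCN
])
```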

huihuiustc commented 5 years ago

What do you mean by training the large model from smaller ones? Is it this, from your paper: "We initialize deeper networks by parameters from shallower ones for faster convergence"?

For instance: we use kaiming_normal to initialize all parameters, then freeze the TSA and reconstruction modules, leaving requires_grad enabled only in the PCD align and PreDeblur modules.
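
A minimal sketch of such a freezing scheme, with toy convolutions standing in for the real EDVR sub-modules (the attribute names are assumptions):

```python
import torch.nn as nn

class ToyEDVR(nn.Module):
    # Toy stand-ins for the real EDVR sub-modules.
    def __init__(self):
        super().__init__()
        self.predeblur = nn.Conv2d(3, 64, 3, padding=1)
        self.pcd_align = nn.Conv2d(64, 64, 3, padding=1)
        self.tsa_fusion = nn.Conv2d(64, 64, 3, padding=1)
        self.reconstruction = nn.Conv2d(64, 3, 3, padding=1)

model = ToyEDVR()

# Freeze TSA and reconstruction; only PreDeblur and PCD align get gradients.
for frozen in (model.tsa_fusion, model.reconstruction):
    frozen.requires_grad_(False)
```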

Thanks for your attention.

xinntao commented 5 years ago
  1. Yes, we first train shallower ones (see the sketch below).
  2. We will release some models and the training code to train from scratch, but their performance is not as good as the competition models'.
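
One way to realize point 1, assuming plain state-dict checkpoints: load the shallow model's weights into the deeper one and let the extra back residual blocks keep their fresh initialization, since `load_state_dict(..., strict=False)` skips keys the shallow checkpoint lacks. A toy sketch:

```python
import torch
import torch.nn as nn

def make_model(num_back_rb):
    # Toy stand-in: a stack of conv "residual blocks" of configurable depth.
    layers = [nn.Conv2d(64, 64, 3, padding=1) for _ in range(num_back_rb)]
    return nn.Sequential(*layers)

shallow = make_model(10)  # trained first
deep = make_model(40)     # to be warm-started from the shallow model

# Copy the shallow weights; layers that exist only in the deep model keep
# their fresh initialization (strict=False simply skips the missing keys).
missing, unexpected = deep.load_state_dict(shallow.state_dict(), strict=False)
print(f"{len(missing)} tensors kept their random init")
```
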
huihuiustc commented 5 years ago

Thanks for the reply; this is really impressive work and research.

We are trying to first replace the deformable convolutions with normal convolutions to train an initial model, then use this model to train the network, and then freeze some of the model blocks before continuing training.
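
A sketch of that substitution: a block that can swap between a plain convolution and a deformable one with the same channel shape, so weights trained with the plain variant stay compatible. It uses torchvision's `DeformConv2d` as a stand-in for EDVR's own DCNv2 op, and the class and flag names are illustrative:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class AlignBlock(nn.Module):
    """Alignment block that can run with a plain conv or a deformable conv."""

    def __init__(self, channels=64, use_dcn=False):
        super().__init__()
        self.use_dcn = use_dcn
        if use_dcn:
            # 2 offset coordinates per 3x3 kernel tap.
            self.conv_offset = nn.Conv2d(channels, 2 * 3 * 3, 3, padding=1)
            self.align = DeformConv2d(channels, channels, 3, padding=1)
        else:
            # Plain conv with identical channels, used to pretrain an initial
            # model before switching the deformable conv back in.
            self.align = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        if self.use_dcn:
            return self.align(x, self.conv_offset(x))
        return self.align(x)

x = torch.randn(1, 64, 32, 32)
print(AlignBlock(use_dcn=False)(x).shape)  # pretraining variant
print(AlignBlock(use_dcn=True)(x).shape)   # deformable variant
```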

xinntao commented 5 years ago

Actually, the DCN is relatively important, so you can first train a small network with DCN (w/o TSA). We are running these experiments and will release them as soon as possible.

splinter21 commented 5 years ago

1. "We trained the large model from smaller ones and used a smaller learning rate for dcn." Do you mean something like this: step 1, train a model with 5 front / 10 back blocks with DCN+TSA at lr=1e-4 (model S, shallow); step 2, train a model with 5 front / 40 back blocks with DCN+TSA at lr(DCN)=5e-5 (e.g.) and lr_others=1e-4, with the parameters of S copied into model D (deep) except for the 30 extra back blocks?

2、"You can first train a small network with DCN (w/o TSA)" Do you mean, only DCN is needed to be pretrained, another paramters after DCN is not needed(not useful for deeper model). For example, I can train 5front blocks with DCN, w/o TSA, and with very shallow SR network after DCN. Then, the DCN is pretrained, paramters after DCN can be abandoned, and I can change SR network whatever I like after DCN?

3. This pretrained-DCN trick cannot give the final model D a deeper or wider DCN module (i.e., changed feature extraction layers before the DCN) compared with model S, because the DCN parameters need to be copied. Is that right?

4. For the second step, there are two choices for the DCN: first, a smaller lr for the DCN; second, freezing the DCN module. The second choice would save a lot of training time and GPU memory (a sketch of it follows below). Is it suitable? @xinntao
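
For choice 2 in point 4, a minimal sketch of freezing a pretrained module and building the optimizer only over trainable parameters, so no gradients or optimizer state are kept for the frozen part (module names are stand-ins):

```python
import torch
import torch.nn as nn

pcd_align = nn.Conv2d(64, 64, 3, padding=1)       # stand-in for pretrained DCN/PCD
reconstruction = nn.Conv2d(64, 64, 3, padding=1)  # stand-in for trainable blocks
model = nn.ModuleDict({"pcd_align": pcd_align, "reconstruction": reconstruction})

# Choice 2: freeze the pretrained module entirely.
model["pcd_align"].requires_grad_(False)
model["pcd_align"].eval()  # also fixes any BN/dropout behavior inside it

# Build the optimizer only over trainable parameters, so no gradients or
# optimizer state are allocated for the frozen module.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=4e-4
)
```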

xinntao commented 5 years ago

We have updated the training code and configs. We provide training scripts for the model with Channel=128 and Back RB=10. The learning rate scheme is different from the one used in the competition, but it is more effective.

1) Train with the script train_EDVR_woTSA_M.yml.
2) Then train with the script train_EDVR_M.yml.

You can try this.
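
Mechanically, the second stage can warm-start from the first-stage (w/o TSA) checkpoint by partial loading; a toy sketch, assuming plain state-dict checkpoints and illustrative module names:

```python
import torch
import torch.nn as nn

class EDVRToy(nn.Module):
    # Toy model: stage 1 trains everything except TSA; stage 2 adds TSA.
    def __init__(self, with_tsa):
        super().__init__()
        self.pcd_align = nn.Conv2d(64, 64, 3, padding=1)
        self.reconstruction = nn.Conv2d(64, 3, 3, padding=1)
        if with_tsa:
            self.tsa_fusion = nn.Conv2d(64, 64, 3, padding=1)

stage1 = EDVRToy(with_tsa=False)  # trained via train_EDVR_woTSA_M.yml
stage2 = EDVRToy(with_tsa=True)   # trained via train_EDVR_M.yml

# Warm-start stage 2 from stage 1; the TSA weights are absent from the
# stage-1 state dict and keep their fresh initialization.
missing, _ = stage2.load_state_dict(stage1.state_dict(), strict=False)
print("freshly initialized:", missing)  # the tsa_fusion tensors
```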

tongjuntx commented 4 years ago

> Thanks for the reply; this is really impressive work and research.
>
> We are trying to first replace the deformable convolutions with normal convolutions to train an initial model, then use this model to train the network, and then freeze some of the model blocks before continuing training.

Have you succeeded? How were the results?