Zj-BinXia / SSL

This project is the official implementation of 'Structured Sparsity Learning for Efficient Video Super-Resolution' (CVPR 2023).

How was the pre-trained model obtained? #6

Closed. zxd-cqu closed this issue 1 year ago.

zxd-cqu commented 1 year ago

I debugged the officially provided code and found that the pre-trained model is also built from modules like Conv2D_WN_Pre and Conv2D_WN. Is my understanding correct that both the pre-trained model and the network initially constructed for pruning are derived from the network defined in SSL-master/basicsr/archs/basicvsr_arch.py? If so, how was the pre-trained model trained? What was the loss function, and were the scaling factors and the convolutional weights trained in the same way? Is it trained directly, like the original BasicVSR, without any additional operations?
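For context, here is a minimal, hypothetical sketch of what a convolution with a trainable per-channel scaling factor could look like. The class name and details are illustrative assumptions, not the actual Conv2D_WN / Conv2D_WN_Pre definitions from the repo:

```python
import torch
import torch.nn as nn

class WNConv2dSketch(nn.Module):
    """Illustrative only: a conv layer whose output channels are modulated by
    a trainable scaling factor, conceptually similar to Conv2D_WN/Conv2D_WN_Pre
    (the real implementations in this repo differ in detail)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        # One trainable scale per output channel; in SSL-style pruning such
        # scales are typically what gets regularized and later thresholded.
        self.scale = nn.Parameter(torch.ones(out_channels))

    def forward(self, x):
        return self.conv(x) * self.scale.view(1, -1, 1, 1)
```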

Zj-BinXia commented 1 year ago

You don't have to overthink it. The training follows the standard BasicSR pipeline; it is not based on pruning.

zxd-cqu commented 1 year ago

So the pre-trained model does include the Conv2D_WN_Pre and Conv2D_WN modules along with trainable scaling factors, and I can train it directly using the methods provided for BasicVSR.
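If that is the case, a minimal sketch of such direct pre-training might look like the following, assuming a standard BasicVSR-style recipe (Adam plus a pixel loss). The toy module, shapes, and loss choice here are assumptions for illustration, not the repo's actual training code:

```python
import torch
import torch.nn as nn

# Toy stand-in for the full network: a single conv with a trainable
# per-channel scale (hypothetical, heavily simplified).
class ToyScaledConv(nn.Module):
    def __init__(self, in_ch=3, out_ch=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.scale = nn.Parameter(torch.ones(out_ch))

    def forward(self, x):
        return self.conv(x) * self.scale.view(1, -1, 1, 1)

model = ToyScaledConv()
# model.parameters() already contains self.scale, so the scaling factors are
# optimized together with the conv weights; no pruning-specific step is needed
# during pre-training.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()  # BasicVSR uses a Charbonnier pixel loss; L1 is a close stand-in

x = torch.rand(1, 3, 16, 16)       # dummy "low-resolution" frame
target = torch.rand(1, 8, 16, 16)  # dummy target matching the toy output shape

for _ in range(2):                 # toy iterations
    loss = criterion(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```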

Zj-BinXia commented 1 year ago

Yes

zxd-cqu commented 1 year ago

Got it. Thank you for your response.

sunyclj commented 11 months ago

Regarding the earlier point that the pre-trained model includes the Conv2D_WN_Pre and Conv2D_WN modules with trainable scaling factors and can be trained directly with the BasicVSR methods: if a pre-trained model does not include Conv2D_WN_Pre and Conv2D_WN with trainable scaling factors, can it not be used that way?