hzg456 opened this issue 3 years ago
Why should FNet and SRNet be trained separately (I mean, using two optimizers)? Is it because the function `tfa.image.dense_image_warp` cannot be used for gradient back-propagation?

The only advantage I can see is that the two networks could be fine-tuned with different hyper-parameters; however, in the source code no different hyper-parameters are applied, so I don't think separate training is a must. If I am wrong, please correct me.
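For clarity, here is a minimal sketch of what I mean by "two optimizers". It is not the repo's actual training loop; `fnet`, `srnet`, `train_step`, and the toy layers are placeholders I made up, and both optimizers use the same Adam settings, which is exactly the situation I am asking about:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Toy stand-ins, not the real networks:
fnet = tf.keras.Sequential([tf.keras.layers.Conv2D(2, 3, padding="same")])   # predicts a 2-channel flow field
srnet = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding="same")])  # stand-in for SRNet (no real upsampling)

# Two separate optimizers, but with identical hyper-parameters,
# as appears to be the case in the source code.
opt_f = tf.keras.optimizers.Adam(1e-4)
opt_sr = tf.keras.optimizers.Adam(1e-4)

def train_step(prev_frame, cur_frame, target):
    with tf.GradientTape(persistent=True) as tape:
        flow = fnet(tf.concat([prev_frame, cur_frame], axis=-1))
        # If dense_image_warp supports gradient back-propagation, gradients from
        # the SR loss below would also reach FNet through this warp.
        warped = tfa.image.dense_image_warp(prev_frame, flow)
        sr = srnet(tf.concat([cur_frame, warped], axis=-1))
        loss = tf.reduce_mean(tf.square(sr - target))
    # Each optimizer updates only its own network's variables.
    opt_f.apply_gradients(zip(tape.gradient(loss, fnet.trainable_variables),
                              fnet.trainable_variables))
    opt_sr.apply_gradients(zip(tape.gradient(loss, srnet.trainable_variables),
                               srnet.trainable_variables))
    del tape
    return loss
```

If the warp does propagate gradients, the same step could apparently be written with a single optimizer over `fnet.trainable_variables + srnet.trainable_variables`, which is why I am wondering whether the separate optimizers are actually necessary.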