Closes #12
Loss functions can now be completely configured via res/configs/loss_functions.yaml.
This cleans up the code a bit and offers users a simpler way to adjust the loss computation.
It is now also possible to opt out of the scaling and correction used in the paper.
This makes things a bit more flexible (e.g. SwinIR is trained with an L1 loss for the classical SR task, so it's probably better to use it in our case as well).
I know that IDEs will give an "Expected type Metric but got float" warning when scaling the metric, but I've opened a PR for that.
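As a rough sketch of the idea, a config-driven setup might look like the following. The schema, loss names, and `build_loss` helper here are hypothetical and only illustrate the pattern; the actual keys live in `res/configs/loss_functions.yaml` and may differ.

```python
# Hypothetical dict mirroring what a loss_functions.yaml might contain
# after loading; the real schema in res/configs/loss_functions.yaml may differ.
config = {
    "losses": [
        {"name": "l1", "weight": 1.0},
        {"name": "mse", "weight": 0.5},
    ],
    "apply_paper_scaling": False,  # opt out of the paper's scaling/correction
}


def l1(pred, target):
    """Mean absolute error over two flat sequences (toy stand-in)."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)


def mse(pred, target):
    """Mean squared error over two flat sequences (toy stand-in)."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)


# Registry mapping config names to callables.
REGISTRY = {"l1": l1, "mse": mse}


def build_loss(cfg):
    """Combine the configured losses into one weighted callable."""
    terms = [(REGISTRY[entry["name"]], entry["weight"]) for entry in cfg["losses"]]

    def total(pred, target):
        return sum(w * fn(pred, target) for fn, w in terms)

    return total


loss = build_loss(config)
print(loss([1.0, 2.0], [1.0, 4.0]))  # 1.0 * l1 + 0.5 * mse = 1.0 + 1.0 = 2.0
```

Swapping the L1/L2 mix (e.g. to match SwinIR's plain L1 training) then only requires editing the YAML, not the code.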