yhjo09 / SR-LUT


Why is the output of SR-LUT clipped to [-1, 1]? #12

Closed Harr7y closed 1 year ago

Harr7y commented 1 year ago

In Train_Model_S.py, lines 182-184, each output is clipped to [-1, 1]; the combined batch_S then spans [-2, 2], while batch_H lies in [0, 1].

Is it correct to compute the L1 loss between values in [-2, 2] and values in [0, 1]?
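To make the range mismatch concrete, here is a minimal numpy sketch of the setup being asked about. It is not the repo's code: the four arrays stand in for the model's rotation-ensemble outputs, and the divide-by-2 combination is an assumption chosen so that batch_S spans [-2, 2] as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the four rotated model outputs
# (the real ones come from the SR-LUT network in Train_Model_S.py).
preds = [rng.uniform(-3.0, 3.0, size=(8, 8)) for _ in range(4)]

# Each output is clipped to [-1, 1] ...
clipped = [np.clip(p, -1.0, 1.0) for p in preds]

# ... and combined; four terms in [-1, 1] divided by 2 gives a
# possible range of [-2, 2] for batch_S (assumed scaling).
batch_S = sum(clipped) / 2.0

# The target batch_H lives in [0, 1].
batch_H = rng.uniform(0.0, 1.0, size=(8, 8))

# The L1 loss is computed directly between the two ranges.
l1 = np.abs(batch_S - batch_H).mean()

print(batch_S.min() >= -2.0 and batch_S.max() <= 2.0)  # True: clip bounds the sum
```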

yhjo09 commented 1 year ago

Hi. You could clip each output value to [0, 1] and sum them to produce final values in [0, 1]. However, allowing negative values is necessary to train a highly expressive model with improved PSNR and visual quality. The possible value range of batch_S is [-2, 2], but it is adjusted to the target range [0, 1] during training without any tricks.
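The claim that the [-2, 2] output is "adjusted to [0, 1] during training without any tricks" can be illustrated with a toy subgradient-descent loop. This is only a sketch under assumed details (four scalar outputs, a /2 combination, plain L1 loss), not the repo's training code; it shows that minimizing L1 against a target in [0, 1] pulls the unconstrained prediction into that range on its own.

```python
import numpy as np

target = 0.7  # a batch_H value in [0, 1]

# Four raw "rotation outputs" (hypothetical), initialized inside [-1, 1]
# so gradients can flow through the clip.
s = np.array([0.9, -0.9, 0.5, -0.5])
lr = 0.01

def forward(s):
    # Clip each output to [-1, 1], combine with the assumed /2 scaling,
    # so the prediction can in principle reach anywhere in [-2, 2].
    return np.clip(s, -1.0, 1.0).sum() / 2.0

for _ in range(500):
    out = forward(s)
    err_sign = np.sign(out - target)          # subgradient of L1 w.r.t. out
    inside = (np.abs(s) < 1.0).astype(float)  # clip passes gradient only inside [-1, 1]
    s -= lr * err_sign * inside / 2.0

# After training, the prediction sits near the target inside [0, 1],
# even though no constraint ever forced it there.
print(round(forward(s), 2))
```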

Harr7y commented 1 year ago

Thanks for the reply.