ddlee-cn / MuLUT

[ECCV 2022 & T-PAMI 2024] Multiple Look-Up Tables for Efficient Image Restoration
https://mulut.pages.dev
MIT License

about the lut #2

Closed · mrgreen3325 closed this 1 year ago

mrgreen3325 commented 1 year ago

Hi. Thanks for your work. May I ask: the model is trained in float32, so how is the LUT stored in int8 format? Thanks.

ddlee-cn commented 1 year ago

The network outputs are clamped and quantized to [-127, 127] in step 2, i.e., when transferring the network to the LUT (line here).
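For concreteness, a minimal sketch of that clamp-and-quantize step (the random tensor is just a stand-in for the float32 network output; names are illustrative, not the repo's exact code):

```python
import torch

# Stand-in for a float32 network output (the real values come from the
# transfer script referenced above; this tensor is only for illustration).
out = torch.randn(1, 4, 1, 1)             # float32 output, roughly in [-1, 1]
out = torch.clamp(out * 127, -127, 127)   # scale and clip to the int8 range
entry = torch.round(out).to(torch.int8)   # quantized LUT entry
```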

To replicate the performance of the float-type network with int-type LUTs, we introduce LUT re-indexing and LUT-aware finetuning, as described in our paper.

Thanks for your interest and feel free to comment below if you have further questions.

mrgreen3325 commented 1 year ago

Hello, thanks for the reply. I notice that your inputs in step 2 are ints; is that why you can clamp and quantize the output to [-127, 127]? I am looking into https://github.com/zhjy2016/SPLUT/blob/main/training_testing_code/Transfer_SPLUT_M.py. Its transfer seems very different from yours; do you think that method can also output an int format?

ddlee-cn commented 1 year ago

The inputs of SRNet (4D int combinations of pixels) are converted to [0, 1] by dividing by 255 (line here). The outputs of SRNet are scaled to [-127, 127] by multiplying by 127. This way, the inputs and outputs of both SRNet and MuLUT are integers, while the tensors and gradients inside SRNet are floating-point numbers.
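To make the two scalings concrete, here is a hedged end-to-end sketch of the transfer, with a dummy stand-in for SRNet and an assumed sampling interval of 16 (17 levels per input pixel); none of these names are the repo's actual code:

```python
import torch
import torch.nn as nn

srnet = nn.Sequential(nn.Conv2d(1, 1, 2), nn.Tanh())  # dummy stand-in for SRNet

levels = torch.arange(0, 257, 16).clamp(max=255).float()  # 0, 16, ..., 240, 255
# Enumerate all 17^4 combinations of the four input pixels.
grid = torch.cartesian_prod(levels, levels, levels, levels)
inputs = grid.view(-1, 1, 2, 2) / 255.0      # int pixel values -> [0, 1] floats

with torch.no_grad():
    out = srnet(inputs)                      # float32 outputs
    out = torch.clamp(out * 127, -127, 127)  # scale and clip to the int8 range
    lut = torch.round(out).to(torch.int8)    # int8 LUT, one entry per combination
```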

As for SPLUT, it splits 8-bit integers into higher bits and lower bits, and then processes them with a parallel structure.
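For example, that split could look like this (a hypothetical illustration of separating the high and low 4 bits, not SPLUT's actual code):

```python
import numpy as np

# Each 8-bit pixel is decomposed into its high 4 bits and low 4 bits,
# which parallel LUT branches can then process separately.
img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
high = img >> 4    # most-significant 4 bits, values in [0, 15]
low = img & 0x0F   # least-significant 4 bits, values in [0, 15]
```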

Hope it helps. :)