Restorers provides out-of-the-box TensorFlow implementations of SoTA image and video restoration models for tasks such as low-light enhancement, denoising, deblurring, and super-resolution.
In the original paper (https://arxiv.org/abs/2204.04676) the authors find the optimal number of blocks to be 36: it gives a large gain in performance for only a minimal increase in latency. So we need to train NAFNet with that setting.

In initial tests we see that NAFNet is able to beat MIRNet-v2 on the LOL dataset even with just 9 blocks. There is therefore reason to believe that a distilled 9-block NAFNet, trained from a 36-block teacher model, can give excellent performance. This needs to be tested.
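As a sketch of how such a teacher-student setup could be trained, the loss below combines a supervised L1 term against the ground truth with a distillation term that pulls the 9-block student toward the 36-block teacher's output. Note that this is an illustrative assumption, not the paper's method: the function `distillation_loss`, the weighting parameter `alpha`, and the use of plain L1 terms are hypothetical choices, shown here with NumPy arrays standing in for restored images.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, ground_truth, alpha=0.5):
    # Hypothetical combined loss: a supervised L1 term against the
    # ground-truth image plus an L1 distillation term that matches
    # the student's output to the (frozen) teacher's output.
    supervised = np.mean(np.abs(student_out - ground_truth))
    distill = np.mean(np.abs(student_out - teacher_out))
    return alpha * supervised + (1.0 - alpha) * distill

# Toy example on random "images" shaped (batch, height, width, channels).
rng = np.random.default_rng(0)
gt = rng.random((2, 8, 8, 3))
teacher_out = gt + 0.01 * rng.standard_normal(gt.shape)  # teacher is close to GT
student_out = gt + 0.10 * rng.standard_normal(gt.shape)  # student is noisier
loss = distillation_loss(student_out, teacher_out, gt)
```

In an actual training loop this scalar would be minimized with respect to the student's weights only, with the teacher's weights frozen; `alpha` trades off fidelity to the ground truth against imitation of the teacher.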