Your work is very interesting, and I have two questions regarding the experiments in your paper:
1. In the paper, you compare an approach of image enhancement followed by object detection (e.g., HE, GLADNet, Retinex-Net, EnlightenGAN, Zero-DCE, SID, and REDI). Did these methods use the checkpoints provided by their original papers, or were they retrained on the noisy COCO dataset, or trained from scratch?
2. The Enhance + Denoise and Integrated Enhance + Denoise methods use enhancement networks for RGB images (HE, GLADNet, Retinex-Net, EnlightenGAN, Zero-DCE) and for RAW images (SID and REDI), respectively. During inference, are the enhancement networks still applied to images in the RGB domain and the RAW domain, respectively?
Looking forward to your reply.