Closed ToTheBeginning closed 4 years ago
Thanks for your kind words. In that paper, we wrote a differentiable color conversion layer. You can refer to https://github.com/smartcameras/EdgeFool/blob/master/Train/rgb_lab_formulation_pytorch.py, which does the same job. Please note that our code is not exactly the same, and we cannot guarantee the correctness of that implementation. If you encounter any further questions, please let us know.
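For reference, below is a minimal NumPy sketch of the standard sRGB → CIELAB formulas (D65 white point) that a layer like the linked `rgb_lab_formulation_pytorch.py` expresses with torch ops so gradients can flow through it. This is an illustrative sketch of the formulation only, not the repository's code:

```python
import numpy as np

def srgb_to_lab(rgb):
    """rgb: float array in [0, 1], shape (..., 3). Returns (..., 3) Lab values."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # 1) undo the sRGB gamma -> linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # 2) linear RGB -> XYZ (sRGB / D65 matrix)
    M = np.array([[0.412453, 0.357580, 0.180423],
                  [0.212671, 0.715160, 0.072169],
                  [0.019334, 0.119193, 0.950227]])
    xyz = lin @ M.T
    # 3) normalize by the D65 reference white
    xyz = xyz / np.array([0.950456, 1.0, 1.088754])
    # 4) Lab nonlinearity f(t), with the linear branch near zero
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    L = 116.0 * fy - 16.0          # L in [0, 100]
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```

Every step is a smooth (piecewise-differentiable) elementwise op or a matrix multiply, which is why the same math written with torch tensors backpropagates without any special handling.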
Thank you for your help. After several hours of testing, the code you posted works correctly in my experiments! If I make any new findings, I will reopen this issue and share them here.
Thanks again.
Nice work and thanks for sharing the code!
I noticed that the CVPR 2019 paper "Deep Exemplar-based Video Colorization" is also from your team, and the two papers share a similar perceptual loss. In this work, all images sent to VGG are in RGB space, whereas in that work the generated images are in CIELAB space. How do you feed them to a VGG pre-trained on RGB images?
Did you retrain VGG in CIELAB space, or use some other method to resolve this color-space inconsistency? Can you share your solution? That would be very helpful!
Thanks first.
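My current guess is that a differentiable Lab → RGB conversion inserted before VGG would resolve the inconsistency, so the RGB-pretrained network never sees Lab inputs. A NumPy sketch of the inverse formulas (a PyTorch version would use the same math with torch ops); this is only my speculation, not necessarily what the authors did:

```python
import numpy as np

def lab_to_srgb(lab):
    """lab: float array, shape (..., 3), L in [0, 100]. Returns sRGB in [0, 1]."""
    lab = np.asarray(lab, dtype=np.float64)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    # 1) invert the Lab nonlinearity
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    f = np.stack([fx, fy, fz], axis=-1)
    t = f ** 3
    xyz = np.where(t > 0.008856, t, (f - 16.0 / 116.0) / 7.787)
    # 2) scale by the D65 reference white
    xyz = xyz * np.array([0.950456, 1.0, 1.088754])
    # 3) XYZ -> linear RGB (inverse of the sRGB matrix)
    Minv = np.array([[ 3.240479, -1.537150, -0.498535],
                     [-0.969256,  1.875992,  0.041556],
                     [ 0.055648, -0.204043,  1.057311]])
    lin = np.clip(xyz @ Minv.T, 0.0, 1.0)
    # 4) apply the sRGB gamma
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * lin ** (1.0 / 2.4) - 0.055)
```

The converted RGB output could then be normalized with the usual ImageNet mean/std and passed to the pretrained VGG, keeping the whole pipeline differentiable.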