cuiziteng / Illumination-Adaptive-Transformer

[BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement; runs in 0.004 seconds per image. Try it for pre-processing.
Apache License 2.0

Hello, how can IAT and YOLO be trained jointly? #37

Closed blackpcl closed 1 year ago

cuiziteng commented 1 year ago

Thanks for your interest. Please refer to Sec. 8 of the paper; the code implementation is simply joint training: place IAT in front as a pre-encoder. See https://github.com/cuiziteng/Illumination-Adaptive-Transformer/blob/main/IAT_high/IAT_mmdetection/mmdet/models/detectors/IAT_detector/IAT_yolo.py
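
A minimal PyTorch sketch of the idea, with IAT placed in front of the detector so the detection loss trains both parts end to end. The imports `IAT` and `YOLOV3` and their call signatures are assumptions for illustration only; the actual wrapper is the IAT_yolo.py file linked above (mmdetection style).

```python
import torch.nn as nn

# NOTE: placeholder imports for illustration only. The module names `IAT` and
# `YOLOV3` and their signatures are assumptions, not the repo's exact API.
from iat import IAT          # the ~90K-parameter enhancement transformer
from yolo import YOLOV3      # any detector that returns a loss dict in training


class IATYoloDetector(nn.Module):
    """Joint model: IAT enhances the input image, then the detector runs on it."""

    def __init__(self):
        super().__init__()
        self.pre_encoder = IAT()    # IAT placed in front as a pre-encoder
        self.detector = YOLOV3()

    def forward(self, images, targets=None):
        # Enhance (low-light) images first; gradients from the detection loss
        # flow back through the detector into IAT, so both are trained jointly.
        enhanced = self.pre_encoder(images)
        if self.training:
            return self.detector(enhanced, targets)   # detection loss dict
        return self.detector(enhanced)                # predictions at inference
```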

blackpcl commented 1 year ago

Thank you for the answer, understood.

beyond96 commented 9 months ago

> Thanks for your interest. Please refer to Sec. 8 of the paper; the code implementation is simply joint training: place IAT in front as a pre-encoder. See https://github.com/cuiziteng/Illumination-Adaptive-Transformer/blob/main/IAT_high/IAT_mmdetection/mmdet/models/detectors/IAT_detector/IAT_yolo.py

Great work! I have a question: if IAT is placed in front as a pre-encoder, do I also need to load the LOL dataset used to train the IAT part during joint training? Or is it enough to load only the ExDark detection dataset and compute the loss solely from the detection results?
