cuiziteng / Illumination-Adaptive-Transformer

🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement; runs in 0.004 seconds, try it for pre-processing.
Apache License 2.0

test #52

Closed PeterSmith1 closed 6 months ago

PeterSmith1 commented 1 year ago

Hello expert, after training IAT+YOLOv3, I found during testing that the test images were not actually enhanced by IAT — only detection was performed. What do I need to do to visualize the IAT enhancement of the test images?

cuiziteng commented 1 year ago

You can save the intermediate images during inference.

WWJ0720 commented 1 year ago

> You can save the intermediate images during inference.

Hello, during testing `result = model(return_loss=False, rescale=True, **data)` is called directly — how can I get the model's intermediate result (the output after IAT processing)?

cuiziteng commented 1 year ago

I'm no expert, but hello,

https://github.com/cuiziteng/Illumination-Adaptive-Transformer/blob/102b6fe997c9babc2f8053654b8c1b3304a36581/IAT_high/IAT_mmdetection/mmdet/models/detectors/IAT_detector/IAT_yolo.py#L26C44-L26C44

Basically, just save the image produced at line 26 there.
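In case it helps, here is a minimal, dependency-free sketch of "saving the intermediate image". It assumes the enhanced image is available as a channel-first `[3][H][W]` array of floats in `[0, 1]`; the helper names `to_uint8` and `save_ppm` are hypothetical, and PPM is used only because it needs no third-party library — in the actual mmdetection code you would more likely call `torchvision.utils.save_image` on the tensor directly at that line.

```python
def to_uint8(x):
    """Clamp a float to [0, 1] and scale it to the 0-255 byte range."""
    return int(max(0.0, min(1.0, x)) * 255)

def save_ppm(chw, path):
    """Write a [3][H][W] nested-list float image as a binary PPM file.

    chw: channel-first RGB image, values expected in [0, 1]
    path: output file path (e.g. "enhanced.ppm")
    """
    c, h, w = len(chw), len(chw[0]), len(chw[0][0])
    assert c == 3, "expects an RGB image in channel-first layout"
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (w, h))  # PPM header: magic, size, maxval
        for y in range(h):
            for x in range(w):
                # interleave the three channels per pixel
                f.write(bytes(to_uint8(chw[ch][y][x]) for ch in range(3)))
```

With a real tensor `t` you would first move it off the GPU, e.g. `save_ppm(t.detach().cpu().tolist(), "enhanced.ppm")`.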

WWJ0720 commented 1 year ago

> I'm no expert, but hello,
>
> https://github.com/cuiziteng/Illumination-Adaptive-Transformer/blob/102b6fe997c9babc2f8053654b8c1b3304a36581/IAT_high/IAT_mmdetection/mmdet/models/detectors/IAT_detector/IAT_yolo.py#L26C44-L26C44
>
> Basically, just save the image produced at line 26 there.

Thanks for the answer!! Have you ever run into this problem: when running test.py and train.py, I cannot set breakpoints inside forward() or any other method, nor do print or save operations there — only breakpoints in __init__ take effect — so I cannot save the image at that point. Do you have any ideas on how to solve this? Many thanks!!
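One possible cause worth ruling out (a guess, not confirmed by this thread): breakpoints and prints inside `forward()` never fire when Python imports an installed copy of `mmdet` from site-packages instead of the locally edited repo, so the edited file is simply never executed. A quick, stdlib-only way to check which copy is being imported — `import_origin` is a hypothetical helper, shown here with `"mmdet"`:

```python
import importlib.util

def import_origin(pkg_name):
    """Return the file path Python would load a package from, or None."""
    spec = importlib.util.find_spec(pkg_name)
    return spec.origin if spec is not None else None

if __name__ == "__main__":
    # If this prints a path under site-packages rather than your local
    # repository, your edits are never run; reinstalling the repo in
    # editable mode (`pip install -e .` inside IAT_mmdetection) would
    # make Python use the local source.
    print(import_origin("mmdet"))
```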