cuiziteng / Illumination-Adaptive-Transformer

[BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement, runs in 0.004 seconds; try this for pre-processing.
Apache License 2.0

About LOL_pretrain.pth #31

Closed: e96031413 closed this issue 1 year ago

e96031413 commented 1 year ago

@cuiziteng Hello, I saw your explanation of joint training in issue #22 ("I don't understand how joint training is implemented").

In IAT_high/IAT_mmdetection/configs/yolo/yolov3_IAT_lol.py I found LOL_pretrain.pth:

pre_encoder = dict(type='IAT', in_dim=3, with_global=True,
                   init_cfg=dict(type='Pretrained', checkpoint='LOL_pretrain.pth'))

May I ask: is the pre_encoder weight (LOL_pretrain.pth) a pre-trained weight obtained by training under the IAT_enhance folder on the respective datasets?
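
For reference, one quick way to see what the checkpoint contains is to list its keys; a minimal sketch, assuming LOL_pretrain.pth is either a plain PyTorch state dict or a dict wrapping one under 'state_dict':

import torch

# Load on CPU; checkpoints may be a raw state_dict or wrapped as {'state_dict': ...}.
ckpt = torch.load('LOL_pretrain.pth', map_location='cpu')
state_dict = ckpt['state_dict'] if 'state_dict' in ckpt else ckpt

# List parameter names and shapes; names matching the IAT enhancement
# network (e.g. a local branch and, with with_global=True, a global branch)
# would indicate the file was exported from IAT_enhance training.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))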

cuiziteng commented 1 year ago

Hello, thanks for your interest. That's right, this is the weight pretrained in IAT_enhance; it was trained on the LOL-V2 dataset.
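
So the workflow is to train the enhancement model under IAT_enhance on LOL-V2, then export its weights as LOL_pretrain.pth so the yolov3_IAT_lol.py config above can load them through init_cfg. A minimal sketch of that export step, assuming the enhancement model class is IAT (the import path and checkpoint filename below are illustrative, not the repo's exact layout):

import torch
from model.IAT_main import IAT  # assumed import path inside IAT_enhance

# Rebuild the enhancement model with the same settings the detection
# config uses (in_dim=3, with_global=True).
model = IAT(in_dim=3, with_global=True)

# Load the weights produced by IAT_enhance training on LOL-V2
# ('best_Epoch.pth' is an illustrative filename).
model.load_state_dict(torch.load('best_Epoch.pth', map_location='cpu'))

# Re-save the state dict under the name the mmdetection config expects.
torch.save(model.state_dict(), 'LOL_pretrain.pth')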

e96031413 commented 1 year ago

Understood, thank you for your reply.