Open Y-T-G opened 1 month ago
Is there a reason why the LRs in the configs differ from MI-AOD, which the PPAL paper says it used as a reference? The paper states that the MI-AOD config was followed, i.e. an LR of 0.001 for RetinaNet on both the VOC0712 and COCO datasets.
PPAL: ![image](https://github.com/ChenhongyiYang/PPAL/assets/32206511/390d0c02-3593-4d44-83ce-c5adee4ad8d4)
MI-AOD: ![image](https://github.com/ChenhongyiYang/PPAL/assets/32206511/6d559b5a-f8ad-4829-bad0-6fcbdf17ecc6)
For example, for VOC0712, it's 0.002: https://github.com/ChenhongyiYang/PPAL/blob/15875ed7a524675bc6daeba79b3716a0abca2b64/configs/voc_active_learning/al_train/retinanet_26e.py#L22
And for COCO, it's 0.01: https://github.com/ChenhongyiYang/PPAL/blob/15875ed7a524675bc6daeba79b3716a0abca2b64/configs/coco_active_learning/al_train/retinanet_26e.py#L22
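For anyone wanting to check whether the discrepancy matters, a sketch of how one might override the LR back to the MI-AOD value in an mmdetection-style config (the `_base_` path and exact `optimizer` fields are assumptions based on typical mmdetection configs, not taken from the PPAL repo):

```python
# Hypothetical override config: inherit the PPAL RetinaNet config and lower
# the LR from 0.002 (VOC) / 0.01 (COCO) to the 0.001 used by MI-AOD.
_base_ = './retinanet_26e.py'  # assumed path to the linked base config

# mmdetection-style SGD optimizer dict; momentum/weight_decay values here are
# the common RetinaNet defaults, not confirmed from the PPAL configs.
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
```

Rerunning the active-learning cycles with this override would show whether the reported results depend on the higher LR.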