No, I kept the default COCO dataset settings, as SARDet-100K shares many similarities with COCO. We found that the AdamW optimizer outperforms SGD, so we used AdamW. After experimenting with different learning rates, we found that 0.0001 is stable and provides good performance.
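In MMEngine config terms, that corresponds to something like the sketch below; the weight decay value here is just a placeholder assumption, not the exact value from our released config:

```python
# Sketch of the optimizer setting described above: AdamW at lr=1e-4.
# weight_decay=0.05 is an assumed placeholder; check the repo's config for the actual value.
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=0.0001, weight_decay=0.05),
)
```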
I've looked through `schedule_1x.py` for the learning rate schedule. It looks very similar to one of the MMEngine default suggestions (https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html).
Did you try changing it? Does it give the best results among the settings you tried?
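For reference, the standard MMDetection 1x schedule (linear warmup followed by step decay) typically looks like the sketch below; the values are the upstream defaults and may not match this repo's exact config:

```python
# Assumed upstream MMDetection 1x schedule, shown for reference:
# 500-iteration linear warmup, then step decay at epochs 8 and 11 over 12 epochs.
param_scheduler = [
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
    dict(type='MultiStepLR', begin=0, end=12, by_epoch=True,
         milestones=[8, 11], gamma=0.1)
]
```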