xuannianz / EfficientDet

EfficientDet (Scalable and Efficient Object Detection) implementation in Keras and Tensorflow
Apache License 2.0

mAP not increasing as expected on a custom dataset #111

Closed DujeMedak closed 4 years ago

DujeMedak commented 4 years ago

First of all thank you for this amazing work @xuannianz .

I used this repository to train d0, d1 and d2 models on my custom dataset a month ago, and I got very good results when training with the following command: python3 train.py --freeze-backbone --random-transform --tensorboard-dir {path_here} --snapshot-path {path_here} --phi {phi_here} --compute-val-loss --snapshot {path_here} --batch-size 8 --steps 500 --epochs 20 csv {path_here} {path_here} --val-annotations {path_here}

Now I have an extended version of that dataset (it looks very similar to the one I tested a month ago), but I can't seem to train a model on it. mAP stays very low throughout the first few epochs (< 0.01), and even after 20 epochs the results aren't good: I get around 0.3 in the best cases, and it should be at least 0.75 since the dataset is easy and other detectors (YOLO, SSD, RetinaNet) achieve those results. Looking at other issues and suggested answers, I tried using --freeze-bn and changing the batch size (4, 8, 16). I also tried decreasing the learning rate to 1e-4 or 1e-5, but with no success. I tried the d0, d1 and d2 models again, with and without the AutoAugment weights as a starting point. I checked that my environment is set up as other people suggested in this repository (TensorFlow 1.15 and Keras 2.2.5). None of these solutions worked, so I couldn't train any of the models so far. Do you have any idea what could cause this issue?
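(Editor's note: when mAP stalls like this, one quick sanity check is to compare the ground-truth box sizes and aspect ratios against the anchor configuration. A rough sketch, assuming the fizyr-style CSV annotation format (image_path,x1,y1,x2,y2,class_name) that this repo's csv generator uses; `box_stats` is an illustrative helper, not part of the repo:)

```python
import csv
import math

def box_stats(csv_path):
    """Summarize ground-truth box sizes/ratios from a fizyr-style annotations CSV
    (image_path,x1,y1,x2,y2,class_name) to compare against the anchor settings."""
    sizes, ratios = [], []
    with open(csv_path) as f:
        for row in csv.reader(f):
            if len(row) < 6 or row[1] == '':
                continue  # skip malformed rows and images without annotations
            x1, y1, x2, y2 = map(float, row[1:5])
            w, h = x2 - x1, y2 - y1
            if w <= 0 or h <= 0:
                continue
            sizes.append(math.sqrt(w * h))  # an anchor's "size" is roughly sqrt(area)
            ratios.append(h / w)            # same h/w convention as AnchorParameters
    sizes.sort()
    ratios.sort()
    n = len(sizes)
    return {
        'min_size': sizes[0], 'median_size': sizes[n // 2], 'max_size': sizes[-1],
        'min_ratio': ratios[0], 'median_ratio': ratios[n // 2], 'max_ratio': ratios[-1],
    }
```

If `min_size` is far below the smallest anchor size times the smallest scale, or the ratio range falls outside the configured ratios, the default anchors are a poor fit for the dataset.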

Thank you in advance for the answer.

xuannianz commented 4 years ago

Hi @DujeMedak, it's partially due to a stupid typo, which I have fixed. I'm still testing whether there are other parts to improve. Sorry for the inconvenience.

DujeMedak commented 4 years ago

Hi @xuannianz, thank you for your quick response. I tried pulling the new code, but the results are still the same. After training for 6-7 epochs the model reaches an mAP of around 0.3, and from that point on there is no improvement. In the meantime I trained RetinaNet (fizyr's implementation, with the same dataset and settings), and that model achieves around 0.8 mAP. I also used https://github.com/martinzlocha/anchor-optimization/ for anchor optimization, and I was wondering if it is possible to use that information to optimize the anchor scales and aspect ratios in EfficientDet. Maybe the problem with my dataset is that I don't have proper anchors, so the model cannot make good predictions. What is your opinion on this? Thank you in advance.

xuannianz commented 4 years ago

Yes, anchors have a large effect on the final result. You can change the default anchor settings for your dataset if they do not match the ground-truth boxes well, or comment out these two lines to make sure every ground-truth box has at least one matching anchor. https://github.com/xuannianz/EfficientDet/blob/8d700ba3f129d0f4afa73d5ab1fbab5bede5db8c/utils/anchors.py#L157-L158
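(Editor's note: the idea behind removing those lines can be sketched as a fallback step in anchor assignment: after the usual IoU thresholding, each ground-truth box forces its single best-matching anchor to be positive, so no box is left unmatched. This is a generic sketch of the technique, not the repo's actual code; `compute_iou` and `assign_with_fallback` are illustrative helpers:)

```python
import numpy as np

def compute_iou(anchors, gt_boxes):
    """Pairwise IoU between anchors (N, 4) and gt boxes (M, 4), boxes as x1,y1,x2,y2."""
    x1 = np.maximum(anchors[:, None, 0], gt_boxes[None, :, 0])
    y1 = np.maximum(anchors[:, None, 1], gt_boxes[None, :, 1])
    x2 = np.minimum(anchors[:, None, 2], gt_boxes[None, :, 2])
    y2 = np.minimum(anchors[:, None, 3], gt_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def assign_with_fallback(anchors, gt_boxes, pos_thresh=0.5):
    """Thresholded assignment plus a fallback: each gt box's best anchor is
    forced positive even when its IoU is below pos_thresh."""
    iou = compute_iou(anchors, gt_boxes)     # shape (num_anchors, num_gts)
    assigned = np.full(len(anchors), -1)     # -1 = background
    best_gt = iou.argmax(axis=1)
    positive = iou.max(axis=1) >= pos_thresh
    assigned[positive] = best_gt[positive]
    # fallback: every gt claims its highest-IoU anchor, regardless of threshold
    assigned[iou.argmax(axis=0)] = np.arange(len(gt_boxes))
    return assigned
```

Without the fallback, a small or oddly shaped box whose best IoU never reaches the positive threshold contributes no positive anchors at all, which is exactly the failure mode described above.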

DujeMedak commented 4 years ago

Hi @xuannianz, commenting out the lines you suggested, or creating custom default anchor parameters, fixed the problem. I am able to achieve the expected results now. Thank you for your help. Is there any reason why these lines are not always commented out? Wouldn't that mean optimal anchors are always used, regardless of the dataset?

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

SiBensberg commented 4 years ago

Hi @xuannianz and @DujeMedak, I made my own anchors with the code Duje provided, and the results are way better, thank you for that. It detects small objects much better. But here is my problem: the boxes are really big now and no longer tightly frame the object. They are roughly twice as big as the object, regardless of the size of the picture:

import numpy as np
import keras  # repo environment: standalone Keras 2.x on TensorFlow 1.15
from utils.anchors import AnchorParameters  # AnchorParameters is defined in utils/anchors.py

myAnchors = AnchorParameters(
    sizes=[32, 64, 128, 256, 512],   # base anchor size per pyramid level P3-P7
    strides=[8, 16, 32, 64, 128],    # feature-map stride per level
    # ratio = h / w, from anchor optimization on my dataset
    ratios=np.array([0.634, 1, 1.577], keras.backend.floatx()),
    scales=np.array([0.4, 0.506, 0.641], keras.backend.floatx()),
)


lucasjinreal commented 4 years ago

I still don't know how to generate the anchor scales and ratios. Are there any snippets for doing this?
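(Editor's note: one common approach, in the spirit of YOLO's anchor clustering and not something this repo provides, is to cluster the ground-truth box shapes. `kmeans_1d` and `suggest_anchor_params` below are hypothetical helpers sketching the idea: k-means over h/w gives candidate ratios, and clustering each box's size relative to the power-of-two pyramid level below it gives candidate scales:)

```python
import numpy as np

def kmeans_1d(values, k, iters=100):
    """Plain 1-D k-means with deterministic quantile initialization."""
    values = np.asarray(values, float)
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        new = np.array([values[labels == i].mean() if np.any(labels == i) else centers[i]
                        for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return np.sort(centers)

def suggest_anchor_params(widths, heights, k=3):
    """Suggest k anchor ratios (h/w) and k scales from ground-truth box shapes."""
    w = np.asarray(widths, float)
    h = np.asarray(heights, float)
    ratios = kmeans_1d(h / w, k)             # candidate h/w ratios
    sizes = np.sqrt(w * h)                   # box "size" ~ sqrt(area)
    base = 2.0 ** np.floor(np.log2(sizes))   # power-of-two level size just below the box
    scales = kmeans_1d(sizes / base, k)      # multipliers in [1, 2), like 2**(i/3)
    return ratios, scales
```

The resulting values can then be plugged into an AnchorParameters instance like the one in the earlier comment. Treat this as a starting point and verify the suggested anchors against the dataset before training.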