loiccordone / object-detection-with-spiking-neural-networks

Repository code for the IJCNN 2022 paper "Object Detection with Spiking Neural Networks on Automotive Event Data"
MIT License

About mAP result #6

Closed: xxyll closed this issue 1 year ago

xxyll commented 2 years ago

Hello, I want to discuss the COCO mAP results with you. I downloaded a train, validation, and test dataset for my experiment. The default batch size in your code is 64, but due to my GPU's memory capacity I set the batch size to 8 for training and 16 for testing. My mAP after training is 0.056 and after testing is 0.016. Do such results depend strongly on the batch size?

ghost commented 2 years ago

Hello @xxyll, I have seen that you created an issue on the EOFError. That is what I am running into right now. Could you please explain how you solved this problem?

Regards,

loiccordone commented 2 years ago

Hello @xxyll, sorry for the delay, I was on holiday. These results are obviously not normal. They probably mean that your training diverges right from the start, which happens when the initial learning rate is too large. To answer your question, I have not tested with smaller batch sizes. Since your batch size is smaller than the one I used, I think you need to adjust the learning rate to avoid the divergence; you could try dividing the default learning rate by 10, for example.
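For what it's worth, a common heuristic for this is the linear scaling rule: scale the learning rate proportionally to the batch size. A minimal sketch, where `base_lr` and `base_batch_size` are placeholders standing in for the repository defaults (check your own config for the actual values):

```python
# Linear scaling rule: scale the learning rate with the batch size.
# The values below are placeholders; substitute the defaults from your config.
base_batch_size = 64   # batch size the default learning rate was tuned for
base_lr = 1e-3         # default learning rate (placeholder value)

my_batch_size = 8
scaled_lr = base_lr * my_batch_size / base_batch_size  # 8/64 -> base_lr / 8
print(f"Scaled learning rate: {scaled_lr:.2e}")
```

Dividing by 8 under this rule is close to the "divide by 10" suggestion above; either should be a reasonable starting point.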

If it helps, here are my training loss and validation mAP curves with the default parameters and the VGG-11 backbone. You can stop your training early if your loss doesn't drop below 1 before epoch 20 (see the sketch after the plots).

[Plots: training loss and val_AP_IoU=.5:.05:.95 over epochs, default parameters, VGG-11 backbone]
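If you want to automate that sanity check, here is a minimal sketch written as a PyTorch Lightning callback. It assumes the training loop logs its loss under the key `"train_loss"`; adapt the key and thresholds to what your module actually logs.

```python
import pytorch_lightning as pl

class DivergenceCheck(pl.Callback):
    """Stop training early if the loss has not dropped below a threshold in time.

    Assumes the LightningModule logs its training loss under "train_loss";
    adapt the key to whatever your module actually logs.
    """

    def __init__(self, threshold=1.0, patience_epochs=20):
        self.threshold = threshold
        self.patience_epochs = patience_epochs

    def on_train_epoch_end(self, trainer, pl_module):
        loss = trainer.callback_metrics.get("train_loss")
        if (
            trainer.current_epoch >= self.patience_epochs
            and loss is not None
            and loss > self.threshold
        ):
            # Training has likely diverged: stop instead of wasting compute.
            trainer.should_stop = True
```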

xxyll commented 2 years ago

Hi, I changed the learning rate to 0.0003; the batch size is still 8. After training for 50 epochs the loss is 0.734 and the mAP is 0.141, but after testing the mAP is 0.010. Does this mean that changing the learning rate has not worked? I only downloaded the train_a, val_a, and test_a datasets from the official website for my experiment. Could the amount of data also affect the results?

loiccordone commented 2 years ago

I can't answer with certainty, but if your training converges (which seems to be the case) and the test mAP is bad, it means the network overfitted your train dataset. In my opinion, you don't have enough training data; try using more training samples. The learning rate seems fine.
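If you download the additional splits, one way to use them together is `torch.utils.data.ConcatDataset`. A hedged sketch, where `GEN1Dataset` and its constructor argument are hypothetical stand-ins for the repository's actual dataset class:

```python
from torch.utils.data import ConcatDataset, DataLoader

# GEN1Dataset is a placeholder for the repository's actual dataset class;
# the constructor arguments and paths are illustrative only.
train_parts = [GEN1Dataset(f"data/train_{part}") for part in ("a", "b", "c")]
train_set = ConcatDataset(train_parts)  # behaves like one larger dataset
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
```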