Closed. MichaelMonashev closed this issue 3 years ago.
I had to change MAX_DETECTIONS_PER_IMAGE in the efficientdet-pytorch code to predict all objects. How can I increase this limit without patching your code?
There are module-level constants defined for those limits right now; they are currently set to comfortable values for COCO, VOC, Open Images, etc.
They can be changed on the anchor side and the dataset side: https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/anchors.py#L46 https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/loader.py#L12
Low priority TODO to move them into config.
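Until those limits move into the config, one way to raise them without editing the library is to reassign the module attributes at runtime ("monkey-patching"). With effdet installed that would look like `effdet.anchors.MAX_DETECTIONS_PER_IMAGE = 500` and `effdet.data.loader.MAX_NUM_INSTANCES = 500`, done before the model and dataloader are constructed (names taken from the linked source; exact names and defaults may differ by version, and the patch only takes effect where the library reads the module attribute rather than a `from ... import` copy). A minimal self-contained sketch of the pattern, using a stand-in module instead of effdet:

```python
# Sketch of overriding a module-level constant at runtime.
# The stand-in module below plays the role of effdet.anchors.
import types

anchors = types.ModuleType("anchors_stub")
anchors.MAX_DETECTIONS_PER_IMAGE = 100  # stand-in for the library default

def generate_detections(num_candidates):
    # Stand-in for a library function that reads the module-level limit
    # each time it runs (which is why patching the attribute works).
    return min(num_candidates, anchors.MAX_DETECTIONS_PER_IMAGE)

assert generate_detections(300) == 100   # capped by the default limit
anchors.MAX_DETECTIONS_PER_IMAGE = 500   # patch before building model/loader
assert generate_detections(300) == 300   # limit raised, no library edit needed
```

The same pattern applies to MAX_NUM_INSTANCES on the dataloader side; apply both patches early in your script, before any objects that capture the old values are created.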
Does this mean that if I have only 1 class and 1 object per image (i.e., 1 bounding box), I should set MAX_NUM_INSTANCES=1 in the dataloader?
If I do not, will it hurt my training/validation accuracy?
> Does this mean that if I have only 1 class and 1 object per image (i.e., 1 bounding box), I should set MAX_NUM_INSTANCES=1 in the dataloader?
No.
Changing MAX_DETECTIONS_PER_IMAGE and MAX_NUM_INSTANCES varies the validation loss drastically. I am still investigating the cause of the overfitting and the validation loss divergence.
It would be a huge help if someone could shed light on this.
Thank you, @rwightman .
How do I set the number of detections?