NicholaiStaalung opened 4 years ago
I didn't understand your question. Are you saying "when finetuning from that checkpoint the mAP for your training job look weird?"
I'm saying that when I step through individual predictions, they are always scored as 50%, which is odd in two ways.
Maybe it is something with the activation function for the output. I have played with both SIGMOID and SOFTMAX, but it is hard for me to A/B test them.
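One thing worth noting about the activation choice: a constant 50% score is exactly what you get from an all-zero (or untrained/zeroed-out) classification head, since sigmoid maps a zero logit to 0.5 and softmax maps equal logits to a uniform distribution. A minimal illustration (not from the repo, just standard math):

```python
import math

def sigmoid(x):
    # SIGMOID score function: maps logit 0.0 to exactly 0.5
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    # SOFTMAX over class logits; equal logits give a uniform distribution
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

print(sigmoid(0.0))          # 0.5
print(softmax([0.0, 0.0]))   # [0.5, 0.5]
```

So if the scores are pinned at 50% regardless of input, it may be worth checking whether the classification logits themselves are all zero (e.g. the head was not restored from the checkpoint), rather than which activation is used.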
I have recently trained new models where predictions are returned higher than 50%, but most are still returned as the same value across observations. I think that is weird and not plausible.
I've been dealing with the same issue - what is the design choice here?
System information
Please provide the entire URL of the model you are using?
https://storage.cloud.google.com/mobilenet_edgetpu/checkpoints/ssdlite_mobilenet_edgetpu_coco_quant.tar.gz
Describe the current behavior
The model always returns a score of 50% for positive classifications during training. During inference on a Coral EdgeTPU it returns a wider range of scores.
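Since this checkpoint is quantized, one possible source of the discrepancy is how the quantized score tensor is dequantized: TFLite stores scores as integers and recovers real values via `real = scale * (q - zero_point)`. A sketch of that formula (the scale/zero-point values below are illustrative assumptions, not read from this model):

```python
def dequantize(q, scale, zero_point):
    # Standard TFLite affine dequantization:
    # real_value = scale * (quantized_value - zero_point)
    return scale * (q - zero_point)

# A sigmoid output is often quantized with scale 1/256 and zero_point 0,
# so the midpoint uint8 value 128 dequantizes to exactly 0.5 (i.e. 50%).
print(dequantize(128, 1.0 / 256.0, 0))  # 0.5
```

If the training-side evaluation reads the quantized tensor without applying the per-tensor scale and zero-point (or applies defaults), many distinct raw values could collapse to the same reported score, while the EdgeTPU runtime dequantizes correctly and shows the full range.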
Describe the expected behavior
Code to reproduce the issue
Other info / logs