Hello,
I've noticed that a more recent squeezeDet implementation (Python) sets the anchors slightly differently. Before, for an image of size 512x256, H, W, B had to be 15, 31, 9. Now, for the same image size, H, W, B have to be 16, 32, 9 (apparently a straight division by 16).
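For what it's worth, here is roughly how I understand the newer grid to be derived. This is a minimal sketch of my reading of the change, with illustrative variable names, not the repo's exact code; in particular, the exact anchor-center placement differs between versions, which alone would shift every anchor slightly:

```python
import numpy as np

IMAGE_WIDTH, IMAGE_HEIGHT = 512, 256
ANCHOR_PER_GRID = 9

# Newer code: straight division by the feature stride of 16.
H = IMAGE_HEIGHT // 16   # 16 (the older code ended up with 15)
W = IMAGE_WIDTH // 16    # 32 (older: 31)
B = ANCHOR_PER_GRID      # 9 anchor shapes per grid cell

# Anchor centers on the H x W grid; cell-center placement is an
# assumption here, the two versions space centers differently.
center_x, center_y = np.meshgrid(
    (np.arange(W) + 0.5) * 16,
    (np.arange(H) + 0.5) * 16)
anchor_centers = np.stack([center_x, center_y], axis=-1)
print(anchor_centers.shape)  # (16, 32, 2)
```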
I am using the TF C++ API to test a frozen graph of my trained model, and the detected bounding boxes are now slightly off compared to the detections in the Python environment. I believe this is related to these anchor settings, but I'm not sure whether it can be resolved outside of the C++ API. Any ideas?
Thanks!
EDIT: Issue resolved. The detection was correct; my post-processing wasn't.
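For anyone hitting the same symptom: the part I had to get right was decoding the ConvDet deltas back into boxes against the anchors. A simplified sketch of squeezeDet-style decoding, assuming the (cx, cy, w, h) box convention; the actual implementation uses a clipped "safe" exponential, which I've reduced to a plain `np.exp` here:

```python
import numpy as np

def decode_deltas(anchors, deltas):
    """Turn predicted deltas into boxes.

    anchors, deltas: (N, 4) arrays in (cx, cy, w, h) order.
    """
    cx = anchors[:, 0] + anchors[:, 2] * deltas[:, 0]  # shift center by w-scaled dx
    cy = anchors[:, 1] + anchors[:, 3] * deltas[:, 1]  # shift center by h-scaled dy
    w = anchors[:, 2] * np.exp(deltas[:, 2])           # scale width
    h = anchors[:, 3] * np.exp(deltas[:, 3])           # scale height
    return np.stack([cx, cy, w, h], axis=1)
```

If this decoding (or the H, W grid it pairs with) doesn't match what the model was trained with, every box comes out consistently "a little bit off", which is exactly what I was seeing.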