Hi @mking1011, This certainly can happen if your model was trained with pictures in the wrong orientation due to an error in the pre-processing code. It's also possible that there is a mistake in my code, but I think I previously checked that the orientation of the input image is correct. I'll check it again later.
In any case, if rotating the camera yields better results, you can simply modify the source code to rotate the input image to a suitable angle.
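For illustration only, here is a minimal Python sketch of what that modification could look like, assuming the frame arrives as a NumPy array; `rotate_then_detect` and `detector.detect()` are hypothetical names, not code from this repo:

```python
# Hypothetical sketch (not the repo's actual code): rotate the camera frame
# before handing it to the detector. `detector.detect()` is a placeholder
# for whatever inference call the project really uses.
import numpy as np

def rotate_then_detect(detector, frame: np.ndarray, quarter_turns: int = 1):
    # np.rot90 rotates counter-clockwise by 90 degrees * quarter_turns.
    rotated = np.rot90(frame, k=quarter_turns)
    return detector.detect(rotated)
```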
Hi @Syn-McJ Maybe if you check it yourself, you will see that rotating the camera increases the recognition rate.
But when I rotated my camera, the box that marks the detected area was drawn in the wrong place.
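A likely reason, sketched here in Python purely for illustration: if the frame is rotated before inference but the boxes are drawn on the un-rotated preview, the coordinates have to be mapped back first. `rotate_box_back_90ccw` is a hypothetical helper, and the box format is an assumption, not taken from this repo:

```python
# Hypothetical sketch: if detection ran on a frame rotated 90 degrees
# counter-clockwise, map each normalized box back to the un-rotated frame
# before drawing the overlay. The (ymin, xmin, ymax, xmax) in [0, 1] format
# is an assumption borrowed from the TensorFlow Object Detection API.
def rotate_box_back_90ccw(box):
    ymin, xmin, ymax, xmax = box
    # A point (y', x') in the rotated frame sits at (y, x) = (x', 1 - y')
    # in the original frame, so the box corners swap and flip accordingly.
    return (xmin, 1.0 - ymax, xmax, 1.0 - ymin)
```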
Hi @mking1011,
I rechecked both my example and the one from the TensorFlow repo. It seems like there are no mistakes with the input orientation.
I also tried rotating the camera as you advised, and even though I think I did notice a tiny improvement in some cases, I don't think that's enough evidence to draw any conclusions. The performance improvement was negligible, if it existed at all.
In any case, if you feel that a different orientation helps in your case, you can modify the rotation angle as you wish before feeding the input to the detector.
I'll close this issue for now, feel free to reopen if you have concrete evidence.
Turning the camera upside down further increases recognition rates
Is that just me?