Closed JAOT closed 1 year ago
Hi, I haven't worked with an RPi, so unfortunately I have no idea about tweaking the threading performance on it :/ Instead of capturing all the frames, we could try dropping some every now and then, but as you said, the accuracy would be lower
Have you tried setting url = -1 for a USB camera when using threading?
I am using URL = 0 because I am using the camera module, but I can try -1. I was unaware of that option, even after reading the articles.
Using it, I get stuck at "Starting the live stream." with threading on.
What would be the best way to drop every odd frame? That might be enough, despite introducing a significant delay. A time-based approach might not be ideal in this regard.
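One way to sketch dropping every odd frame inside the usual `cap.read()` loop. This is a minimal sketch, with the camera simulated by a dummy frame generator so it runs anywhere; in the real script the generator would be replaced by `cv2.VideoCapture(...).read()`:

```python
def frame_source(n=10):
    # Stand-in for cv2.VideoCapture(...).read(); yields dummy frames.
    for i in range(n):
        yield f"frame-{i}"

processed = []
for idx, frame in enumerate(frame_source()):
    if idx % 2 == 1:
        continue  # drop every odd frame
    processed.append(frame)  # detection/tracking would run here

print(processed)  # only even-indexed frames survive
```

This halves the processing load deterministically, unlike a `waitKey`-style delay, at the cost of half the temporal resolution.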
@JAOT Sorry, have you figured out a solution? What I meant by dropping frames is passing the --skip-frames argument, e.g., --skip-frames 30,
to reduce the processing load and increase the frames per second; the accuracy could decrease, though
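Counters in this style typically run the heavy detector only once every `--skip-frames` frames and a cheap tracker in between. A minimal sketch of that cadence (the function name and mode strings are mine, for illustration):

```python
SKIP_FRAMES = 30  # analogous to passing --skip-frames 30

def mode_for(frame_idx, skip=SKIP_FRAMES):
    # Run the (slow) object detector every `skip` frames;
    # use cheap correlation tracking on the frames in between.
    return "detect" if frame_idx % skip == 0 else "track"

modes = [mode_for(i) for i in range(61)]
print(modes.count("detect"))  # detections at frames 0, 30 and 60 -> 3
```

A larger skip value means fewer expensive detector passes per second (faster on an RPi), but objects that appear and leave between detections can be missed, which is the accuracy trade-off mentioned above.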
I did not. The Raspberry Pi, try as I might, is not up to the task. After multiple attempts with smaller frames and color simplification, there is no way to make this work on such a device.
Hello! The system works, but I'd like to improve the performance. When transitioning between states there are a couple of seconds where the system freezes, most visibly in "Detecting", where basically all frames are lost; unless the person moves really slowly, the system just doesn't register them, and most of the time it does not.
By activating threading with an IP camera, using the IP, it gets stuck at:
Using the IP with video at the end, this more verbose error occurs:
And using the camera module:
Again, everything works if threading is set to False, but performance on a Raspberry Pi is poor. I tried adding cv2.waitKey(250) at line 89 so that fewer frames would be captured, but I am not sure this is a good approach, as faster-moving people could be missed.
Is there a better way to control the number of frames the camera captures?
Thanks.