jp3spinelli opened this issue 4 years ago (status: Open)
I have not tested this script yet; however, "the last position of the scan loop before breaking" sounds like the same cause of the hangs I have been trying to resolve, and may be of the same origin.
I noticed the servo oscillates badly in V1.2 when in close vicinity to the subject (the "face" label with the --edge-tpu option enabled), at roughly less than a 3' range.
I changed this value to fix the oscillation:
tilt_p = manager.Value('f', 0.15)  # changed to 0.10
That stopped the oscillation in face-tracking mode (label "face") with the Edge TPU, but the control-flow break is still there.
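For anyone wondering why lowering the P gain helps: a proportional-only toy loop makes the effect visible. This is just a sketch; the plant model, the 12x step factor, and the numbers are invented for illustration and are not the project's actual PID code.

```python
# Sketch: a P-only servo loop with an exaggerated actuation step, to show
# why a lower proportional gain overshoots (and thus oscillates) less.
# The plant model and step factor are made up for illustration only.

def max_overshoot(kp, steps=50):
    """Return the largest excursion past the setpoint for gain kp."""
    position, target, overshoot = 0.0, 100.0, 0.0
    for _ in range(steps):
        error = target - position
        position += kp * error * 12   # 12x step stands in for servo lag
        overshoot = max(overshoot, position - target)
    return overshoot

print(max_overshoot(0.15))  # larger overshoot -> visible oscillation
print(max_overshoot(0.10))  # smaller overshoot -> damped response
```

With the higher gain, each correction overshoots the setpoint by more than it reduces the error, so the servo swings back and forth; the lower gain converges with only a small excursion.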
My setup: Pi 4 (4 GB), 8/20/20 Buster, the same Pimoroni servo HAT, and a clean install with only rpi-deep-pantilt V1.2.
When the object (face) goes out of the FOV (beyond the 90-degree servo limit, or beyond about 8' in distance), the camera sequentially moves lower-left to -90, then right to about 90 degrees, then faces up (toward the ceiling) or down (most of the time) and hangs there. It still detects the face if the servo is forced toward the face, or if the face is placed close to the camera, but it still will not resume tracking.
I would expect the camera to remain at the last known position and wait for some predefined duration, say 1 minute (I'm guessing the exit point is the most likely point of re-entry). If no object is detected within that time, the camera would return to the servo neutral point (pan angle 0 and a predefined tilt angle) and resume tracking once the object is back in the FOV.
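That behavior could be sketched like this. These are hypothetical names, not the project's actual set_servos code: NEUTRAL_PAN/NEUTRAL_TILT, LOST_TIMEOUT, the 0.10 gain, and the state dict are all placeholders for illustration.

```python
import time

# Sketch of the proposed hold-then-recenter behavior; not the project's
# actual set_servos loop. All names and values here are placeholders.
NEUTRAL_PAN, NEUTRAL_TILT = 0.0, 0.0   # assumed neutral servo angles
LOST_TIMEOUT = 60.0                    # seconds to hold last known position

def update_servos(detection, state):
    """detection is (pan_error, tilt_error), or None when no face is in view."""
    now = time.monotonic()
    if detection is not None:
        pan_err, tilt_err = detection
        state["pan"] += 0.10 * pan_err      # P-only correction, for brevity
        state["tilt"] += 0.10 * tilt_err
        state["last_seen"] = now
    elif now - state["last_seen"] > LOST_TIMEOUT:
        # No detection for a while: recenter and wait for the face to return.
        state["pan"], state["tilt"] = NEUTRAL_PAN, NEUTRAL_TILT
    # else: hold the last known position, since the exit point is the most
    # likely re-entry point.
    return state
```

On re-detection the loop naturally resumes correcting from wherever the servos currently are, which is exactly the resume-after-loss behavior that seems to be missing.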
In my testing, when the subject (face) goes out of the FOV, I think it only goes back to detection mode, not tracking mode. What is strange is that it sometimes resumes tracking (after a few minutes, a manually forced re-aim, or on random occasions), but most of the time it is stuck frozen in position, unable to track the face.
In Adrian's PID control, I believe the servo goes back to a predefined neutral position and resumes subject tracking upon subject detection.
Any help would be appreciated, from a newbie.
Hello jp3spinelli,
I ran your script and I'm getting the following error:
```
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu face
Traceback (most recent call last):
  File "/home/pi/.venv/bin/rpi-deep-pantilt", line 6, in
```
I corrected `ModuleNotFoundError: No module named 'RPi'` by running `pip3 install RPi.GPIO`.
Upon running `(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu face`:
Scanning (left and right servo movement) starts, but it neither detects nor stops on a face presented in front of the camera; it just continues the non-stop left and right servo movement.
Did I miss any step?
Hey @Martin2kid! How's the detection quality without pan/tilt tracking enabled? Do you have visuals on what the Pi is seeing?
$ rpi-deep-pantilt detect --loglevel=DEBUG --edge-tpu face
Leigh,
When I ran `rpi-deep-pantilt detect --loglevel=DEBUG --edge-tpu face`, I got the following error.
So I ran it with face tracking and uploaded a short video clip for your reference here: https://youtu.be/LaLfU3RUlao (a monitor was attached to the Pi 4 and the CLI command was issued over VNC; the video clip is not shown over the VNC connection). Setup: Pi 4 (4 GB), Buster 8/20/2020, a clean install with only rpi-deep-pantilt installed per the instructions; I only changed the P setting to stop the servo oscillation.
I think the detection quality and speed are pretty good (better than Intel NCS + OpenVINO with an SSD model and similar PID settings, though slower than a Caffe model without PID, of course).
The only negative I've noticed, and am struggling to figure out, is that when the face goes out of the FOV, the camera positions itself toward the extreme up or down, hangs, and does not resume face tracking when the face returns to the FOV (it still detects the face, though).
Error message:

```
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt detect --loglevel=DEBUG --edge-tpu face
WARNING:root:Detecting labels: ('face',)
INFO:root:loaded labels from /home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/data/facessd_label_map.pbtxt {1: {'id': 1, 'name': 'face'}}
INFO:root:initialized model facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2
INFO:root:model inputs: [{'name': 'normalized_input_image_tensor', 'index': 7, 'shape': array([ 1, 320, 320, 3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}}] [{'name': 'normalized_input_image_tensor', 'index': 7, 'shape': array([ 1, 320, 320, 3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}}]
INFO:root:model outputs: [{'name': 'TFLite_Detection_PostProcess', 'index': 1, 'shape': array([ 1, 50, 4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:1', 'index': 2, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:2', 'index': 3, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:3', 'index': 4, 'shape': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}] [{'name': 'TFLite_Detection_PostProcess', 'index': 1, 'shape': array([ 1, 50, 4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:1', 'index': 2, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:2', 'index': 3, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:3', 'index': 4, 'shape': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}]
INFO:root:starting camera preview
^CException in thread Thread-2:
Traceback (most recent call last):
  File "/home/pi/.venv/lib/python3.7/site-packages/picamera/mmalobj.py", line 1166, in send_buffer
    prefix="cannot send buffer to port %s" % self.name)
  File "/home/pi/.venv/lib/python3.7/site-packages/picamera/exc.py", line 184, in mmal_check
    raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: cannot send buffer to port vc.ril.video_render:in:0: Argument is invalid

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 180, in render_overlay
    self.overlay.update(self.overlay_buff)
  File "/home/pi/.venv/lib/python3.7/site-packages/picamera/renderers.py", line 449, in update
    self.renderer.inputs[0].send_buffer(buf)
  File "/home/pi/.venv/lib/python3.7/site-packages/picamera/mmalobj.py", line 1171, in send_buffer
    'cannot send buffer to disabled port %s' % self.name)
picamera.exc.PiCameraPortDisabled: cannot send buffer to disabled port vc.ril.video_render:in:0: Argument is invalid
```
Leigh,
I cloned rpi-deep-pantilt from my Pi 4 4 GB and moved it to a Pi 4 8 GB (8-20-2020 Buster) with a Picam NoIR, for testing with a brushless FOC gimbal setup driven by a SimpleBGC32 controller board. (I had to clone it because `pip install https://github.com/leigh-johnson/Tensorflow-bin/releases/download/v2.2.0/tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl` no longer works.)
In my test, the SimpleBGC32 hardware setup tracks really well with smooth, fast motion, and PID adjustment is now much more predictable, but it still hangs when the face goes out of the FOV: face detection continues when the face is back in the FOV, but tracking does not resume.
For reference, my old video, based on a Caffe model, is at https://youtu.be/Ce-c9StqzsE. Compared to that video, detection accuracy and tracking motion with the rpi-deep-pantilt implementation on the same device are about 20% better.
The hang behavior was the same with the Pimoroni pan-tilt HAT setup, so I conclude this is neither PID-setting nor camera-orientation related, and I'm leaning toward the closed-loop handling on the TF face-detection input side.
I tried to test with a Logitech C920 USB webcam but could not figure out how to get it working with TF. (Note this camera worked very well with OpenCV + OpenVINO + a Caffe model and produced much better detection than the Picam V2.1, which was too sensitive to background lighting changes.)
I was receiving constant low-power status messages from the Pi 4 (new behavior with the Pi 4 8 GB on 8-20-2020 Buster, at a 5.3 V input setting with the Coral USB stick, the Pimoroni pan-tilt HAT with servo motors, and a cooling fan connected).
After I raised the power supply to 5.5 V, rpi-deep-pantilt tracking became much more stable and predictable, with only minor jitter and very little motion spike. I think the Coral USB stick's power draw and the two Pimoroni servo motors were probably the main cause of the random jitter and spikes I had noticed in the past.
I then ran `rpi-deep-pantilt detect --loglevel=DEBUG --edge-tpu face`.
Detection quality is excellent! Log as follows:
```
(.venv) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect --loglevel=DEBUG --edge-tpu face
WARNING:root:Detecting labels: ('face',)
INFO:root:loaded labels from /home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/data/facessd_label_map.pbtxt {1: {'id': 1, 'name': 'face'}}
INFO:root:initialized model facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2
INFO:root:model inputs: [{'name': 'normalized_input_image_tensor', 'index': 7, 'shape': array([ 1, 320, 320, 3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}}] [{'name': 'normalized_input_image_tensor', 'index': 7, 'shape': array([ 1, 320, 320, 3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}}]
INFO:root:model outputs: [{'name': 'TFLite_Detection_PostProcess', 'index': 1, 'shape': array([ 1, 50, 4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:1', 'index': 2, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:2', 'index': 3, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:3', 'index': 4, 'shape': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}] [{'name': 'TFLite_Detection_PostProcess', 'index': 1, 'shape': array([ 1, 50, 4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:1', 'index': 2, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:2', 'index': 3, 'shape': array([ 1, 50]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}, {'name': 'TFLite_Detection_PostProcess:3', 'index': 4, 'shape': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}]
INFO:root:starting camera preview
2020-11-08 21:03:46.714688: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 307200 exceeds 10% of free system memory.
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 41 1216
2020-11-08 21:03:48.839553: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 307200 exceeds 10% of free system memory.
2020-11-08 21:03:48.966165: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 307200 exceeds 10% of free system memory.
2020-11-08 21:03:48.996462: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 307200 exceeds 10% of free system memory.
2020-11-08 21:03:49.030464: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 307200 exceeds 10% of free system memory.
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 41 1216
DEBUG:PIL.PngImagePlugin:STREAM b'IHDR' 16 13
DEBUG:PIL.PngImagePlugin:STREAM b'IDAT' 41 1216
(continues the same)
```
I added a scanning procedure to the set_servos function in the manager.py script, which breaks out of the scan upon detection and reverts to track mode. However, track mode has a default position that usually leaves the object in question out of the FOV. Here's my script:
How do I set the initial servo position once tracking begins to the last position of the scan loop before breaking?
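One way to do this (a sketch against hypothetical names, not the actual manager.py; only the shared-value idea is the point) is to command the servo through a multiprocessing shared value and never reset it when switching modes, so the angle where the scan breaks is automatically the angle tracking starts from:

```python
from multiprocessing import Manager

# Sketch only: scan()/track(), pan_cmd, and the detection callback are
# hypothetical stand-ins for the shared state in manager.py. The idea is to
# keep the commanded angle in a shared Value instead of a local variable,
# and to avoid resetting it between scan and track modes.

def scan(pan_cmd, detected):
    """Sweep until a detection; the shared value keeps the break position."""
    for angle in (-90, -45, 0, 45, 90):
        pan_cmd.value = float(angle)   # command the servo via shared memory
        if detected():                 # a detection ends the scan...
            break                      # ...with pan_cmd still at that angle

def track(pan_cmd):
    """Begin tracking from wherever the scan stopped, not a preset center."""
    return pan_cmd.value               # note: no reset to a default position

if __name__ == "__main__":
    manager = Manager()
    pan_cmd = manager.Value("f", 0.0)
    hits = iter([False, False, False, True])   # face found on the 4th step
    scan(pan_cmd, lambda: next(hits))
    print(track(pan_cmd))              # tracking starts at the break angle
```

In set_servos terms, that would mean seeding the PID setpoint (or simply not re-centering) from the shared pan/tilt values the scan loop last wrote, instead of moving to the default position when track mode starts.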