wb666greene / AI-Person-Detector

Python Ai "person detector" using Coral TPU or Movidius NCS/NCS2

I've been laid up for about a year and this system has continued to evolve, but with GitHub forcing two-factor authentication (2FA) it has been troublesome to keep this repo up to date as my code evolved. I've started a new repo with the updates I've been running 24/7/365 since just before I got laid up. Since the changes were extensive and I'm not a Git expert, it was easier to make a new repo: https://github.com/wb666greene/AI-Person-Detector-with-YOLO-Verification/blob/main/README.md The major improvement was adding a YOLO verification inference when a person has been detected.

Consider this repo an archive in case you want to try it on some very weak systems that have little hope of running a YOLO model. This is the next step in the evolution of: https://github.com/wb666greene/AI_enhanced_video_security For some sample images of this AI in real-world action, check out the wiki: https://github.com/wb666greene/AI-Person-Detector/wiki/Camera-notes You can see the system in live action at: https://youtu.be/nUatA9-DWGY

The major upgrade is using the Coral TPU and MobilenetSSD-v2_coco for the AI. The Movidius NCS/NCS2 are still supported, but the .bin and .xml files for the MobilenetSSD-v2_coco model are too large to upload to GitHub.

The AI is pure Python3 code and should work on any system that can run a Python3 version supported by the Google Coral TPU or Intel OpenVINO installers, with an OpenCV version capable of decoding h.264/h.265 rtsp streams. If you have cameras capable of delivering "full resolution" Onvif snapshots, are using USB webcams, mjpeg stream cameras (motion or motioneyeOS), or the PiCamera module, then the h.264/h.265 decoding issue is moot.
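As a quick sanity check before going further, a minimal OpenCV snippet like the one below (the rtsp URL is a placeholder, not from my code) will tell you whether your OpenCV build can actually decode your camera's stream:

    import cv2

    # Placeholder URL -- substitute your camera's actual rtsp address.
    cap = cv2.VideoCapture('rtsp://user:password@192.168.1.100:554/stream1')
    ok, frame = cap.read()
    if ok:
        print('Decoded a {}x{} frame, rtsp decoding works.'.format(frame.shape[1], frame.shape[0]))
    else:
        print('Could not decode a frame, see the h.264/h.265 notes below.')
    cap.release()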

Edit: 8Dec20 OpenVINO R2020.3 seems to have fixed the h.264/h.265 decoding issue; this is the last OpenVINO version to support the original Movidius NCS. OpenVINO R2021.1 now supports Ubuntu 20.04 but drops support for Ubuntu 16.04, and it also breaks loading of the .bin and .xml files I've made for the MobilenetSSD-v2_coco model. The TPU performance is so much better than the NCS/NCS2 in this application that I'm not moving beyond OpenVINO R2020.3 anytime soon.

New! TPU support upgraded for Google's recent PyCoral API which supports USB3, M.2, and MPCIe devices.

2MAY21 You can build a nice standalone system with Ubuntu 20.04 (I prefer the Mate flavor), an old i3, i5, or i7 laptop, the correct MPCIe/M.2 TPU device, and some IP netcams, or add it to a stand-alone security DVR/NVR system. Find out which type of MPCIe/M.2 slot your laptop has and order the correct TPU module (~$25); this will likely be the most difficult part of the endeavor. My T410 had a WiFi module in its "mini PCIe" slot; being only WiFi G it was no great loss to remove it, and you will want GB Ethernet to connect to the cameras or DVR/NVR rtsp streams anyway. My TPU code runs on Windows 10, but unfortunately MPCIe TPU support seems broken on Windows, as the Coral PCIe Accelerator Driver fails to start with error code 37 (wrong type) shown in Device Manager. I've opened an issue, but no response yet.

I'm working on a Wiki how-to entry using my old Lenovo T410 i5-540m laptop as an example.

16APR21 AI_dev.py, Coral_TPU_Thread.py, and TPU.py have been modified to try the legacy edgetpu API first and, if it's not installed, to fall back to the new PyCoral API. I've verified the legacy code on my Ubuntu 16.04 i7 desktop with a USB3 TPU, and I've tested the PyCoral support on a Ubuntu 20.04 i3-4025 with an MPCIe TPU module. The M.2 and MPCIe devices cost less than half the USB3 TPU! Let's hope M.2/MPCIe interfaces become common on the next wave of small IOT class machines.
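A minimal sketch of that fallback logic (the model filename here is just an example, not necessarily the one shipped in this repo):

    # Try the legacy edgetpu API first, fall back to the new PyCoral API.
    MODEL_PATH = 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'  # example name
    try:
        from edgetpu.detection.engine import DetectionEngine  # legacy API
        engine = DetectionEngine(MODEL_PATH)
        use_legacy_API = True
    except ImportError:
        from pycoral.utils.edgetpu import make_interpreter    # PyCoral API
        interpreter = make_interpreter(MODEL_PATH)
        interpreter.allocate_tensors()
        use_legacy_API = False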

Support for virtual PTZ using "fisheye" cameras.

30DEC20 The fisheyeTPU.py and fisheyeNCS.py files have been removed and merged into their respective TPU and NCS code. The NCS version supports multiple NCS sticks. Build fisheye_window.cpp to create the virtual PTZ views. The build shell script uses the OpenCV version installed with OpenVINO R2020.3; explaining how to compile OpenCV applications is beyond my pay grade, so StackExchange will become your new best friend.

The virtual PTZ code is derived from here: https://github.com/daisukelab/fisheye_window You can see a short video I made flying around a still image from a fisheye camera to set the virtual PTZ views: https://www.youtube.com/watch?v=UJJPmdTFQfo

Unfortunately building the "maps" for the virtual PTZ views is terribly slow in Python, but they only need to be built once on start-up. On an AtomicPi, fisheyeNCS.py with two NCS sticks gets ~11.1 fps for 4 virtual PTZ views from two rtsp fisheye cameras (2 virtual views per camera); a single NCS2 stick gets ~11.5 fps. Using fisheyeTPU.py with 8 virtual PTZ views from the same pair of fisheye cameras gets ~27.5 fps. If the fisheye.rtsp file is opened and the fisheyeN_map file (N is the camera number) exists, it is loaded instead of calculating the maps. If it doesn't exist, the maps for that camera are calculated and the fisheyeN_map file is created. This dramatically speeds up re-start time after the initial run for a fisheye camera setup: building a map in Python can take 4-20 seconds per view depending on the CPU, while loading them all takes < 1 sec. If no fisheye.rtsp file exists, the new TPU/NCS code runs exactly the same as before.
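The caching works roughly like the sketch below; the file naming and the build_ptz_maps() helper are illustrative, not the exact code in the TPU/NCS scripts:

    import os
    import numpy as np

    def load_or_build_maps(cam_num, build_ptz_maps):
        # build_ptz_maps() stands in for the slow pure-Python map calculation
        cache_file = 'fisheye{}_map.npz'.format(cam_num)   # illustrative name
        if os.path.exists(cache_file):
            cached = np.load(cache_file)                   # loading is < 1 sec
            return cached['map_x'], cached['map_y']
        map_x, map_y = build_ptz_maps(cam_num)             # 4-20 seconds per view
        np.savez(cache_file, map_x=map_x, map_y=map_y)
        return map_x, map_y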

Windows 10 Support for Coral TPU

OpenVINO has supported Windows 10 for a long time, but for this application the Movidius NCS/NCS2 is far inferior to the TPU. With Google's support for the TPU on Windows 10, check out the updated TPU.py and the Wiki instructions: https://github.com/wb666greene/AI-Person-Detector/wiki/NEW!-Windows-10-support-for-TPU

The fisheyeTPU.py code has been tested on Windows 10, and I've managed to compile fisheye_window.cpp on Windows 10 using the "free" Visual Studio 19 to set the virtual PTZ views. Again, explaining how to set up and use Visual Studio is way above my pay grade. Open an issue if you are interested and I'll send my VC "solution" project files, but I have no idea if they would work for you.


Notes from a "virgin" setup of Raspbian Buster Pi3/4, 22JAN2020

Install your OS using the normal instructions. I'll use a Pi3B+ and Raspbian "Buster" desktop (2019-09-26-raspbian-buster-full.zip) for this example. IMHO SD cards are cheap, so buy a big enough one to have a "real" system for testing and development, YMMV. With Buster the same card can be used in a Pi3 or Pi4.

Once you've done the initial boot/setup steps, here are some things I like to do that aren't set up by default. I assume you have a monitor, keyboard, and mouse connected. IMHO it's best to go "headless" only after everything is working. I'll just outline the basic steps; if Google doesn't give you the details, raise an "issue" and we'll flesh out the details. Feel free to skip any you don't like.

These are easiest to do via menu->Preferences->RaspberryPiConfiguration:

These steps are best done in a terminal window (or via ssh; I like ssh so I can cut and paste from my desktop, which has better resources for Google searches):

  1. Turn off screen blanking. While it doesn't matter headless, I hate screen blanking while setting up and debugging:

    • sudo nano /etc/xdg/lxsession/LXDE-pi/autostart
    • Edit to add these two lines at the end:
        @xset s off
        @xset -dpms
  2. setup samba file sharing:

    • sudo apt-get install samba samba-common-bin
    • sudo nano /etc/samba/smb.conf and edit these sections to match:

        [global]
            workgroup = your_workgroup
            mangled names = no
            ; follow symlinks to USB drive
            follow symlinks = yes
            wide links = yes
            unix extensions = no
      
        [homes]
            comment = Home Directories
            browseable = yes
            read only = no
            writeable = yes
            create mask = 0775
            directory mask = 0775
      • create samba password: sudo smbpasswd -a pi
  3. I find it useful to have the GUI digital clock display seconds to get an idea of the latency between the cameras and computer.

    • Opposite-click the clock and choose "Digital Clock Settings" from the popup menu.
    • Change %R to %R:%S in the dialog "Clock Format" box, click "OK" button.

Install node-red:

In general I recommend the Coral TPU over the Movidius NCS/NCS2, but since the Pi3 lacks USB3 it can't really take full advantage of it. Since I have both and the Python code supports both, I'll set up both. On the Pi in a terminal (or via ssh login):

Install OpenVINO for Raspbian:

And my model optimizer command (you need to change /home/wally to match your system):

./mo_tf.py --input_model /home/wally/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/wally/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/wally/ssdv2/pipeline.config --data_type FP16 --log_level DEBUG
# R2021.1 model optimizer command:
python3 mo_tf.py --input_model /home/ai/ssdv2/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /home/ai/ssdv2/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/ai/ssdv2/pipeline.config --data_type FP16

At this point you now have a nice version of OpenCV with some extra OpenVINO support functions installed, EXCEPT that OpenCV 4.1.2-openvino has issues with mp4 (h.264/h.265) decoding, which breaks using rtsp streams! The Pi3B+ is not very usable with rtsp streams, and the earlier OpenVINO versions that do work don't support the Pi4.

Update 6DEC20: With the apt installation of OpenVINO release 2020.3, tested with Ubuntu-Mate 16.04 on an Atomic Pi (Atom Z8550), the h.264/h.265 decoding issue seems solved; its OpenCV version is 4.3.0-openvino-2020.3. Also note that R2020.3 is the last release that supports the original NCS; R2020.4 and beyond only support the NCS2.

Update 30DEC20: OpenVINO R2021.1 supports Ubuntu 20.04, but drops support for the original NCS. AI_dev.py has been modified to automatically use the IR10 model instead of the original IR5 model I used initially, depending on the OpenVINO version.
I also added GPU support (DNN_TARGET_OPENCL_FP16) and added results comparing NCS2, GPU, and CPU on an i7-8750H ASUS Fx-705gm to the performance section at the end.
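Selecting the inference target in OpenCV's DNN module is a one-liner; here's a hedged sketch (the IR filenames are examples, point it at the .xml/.bin pair the model optimizer command above wrote for you):

    import cv2

    # Example filenames -- use the .xml/.bin pair produced by mo_tf.py.
    net = cv2.dnn.readNet('frozen_inference_graph.xml', 'frozen_inference_graph.bin')
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)        # NCS/NCS2
    # net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL_FP16) # Intel GPU
    # net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)         # CPU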

Setup the Coral TPU: https://coral.ai/docs/accelerator/get-started/

The OpenVINO version of OpenCV will work if your cameras do Onvif snapshots, or don't trigger the above mentioned h.264/h.265 decoding issues.
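For the Onvif snapshot route, grabbing a jpeg over HTTP and decoding it avoids the rtsp decoder entirely; a rough illustration (the snapshot URL varies by camera make and model, the one below is only a placeholder):

    import urllib.request
    import numpy as np
    import cv2

    # Placeholder URL -- every camera brand has its own snapshot path.
    SNAPSHOT_URL = 'http://192.168.1.100/onvif/snapshot'
    jpg = urllib.request.urlopen(SNAPSHOT_URL, timeout=5).read()
    frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
    print('Snapshot decoded:', None if frame is None else frame.shape)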

At this point you can download and run my Python code.

Need to do some node-red installation.

Now you can run the AI the same as before, but leaving off the -l s option.

Node-red saves the detections, which makes it easier to change the paths and add meaningful names for the cameras. You can also change -d 1 to -d 0, which will improve performance by skipping the X display of the live images. You can view them one camera at a time in the UI webpage. Viewing the UI webpage and modifying the node-red flow works best with a browser running on a different machine.

Real world advice.

Some performance test results: