wb666greene / AI-Person-Detector

Python AI "person detector" using Coral TPU or Movidius NCS/NCS2

Coral TPU Dev board #8

Open · alexsahka opened this issue 3 years ago

alexsahka commented 3 years ago

Any guide on how to use the Coral TPU Dev board (not the Coral USB Accelerator)? Thanks.

wb666greene commented 3 years ago

I've been running my system on the Coral TPU Dev board for almost a year now. It makes a nice stand-alone system, but it wasn't reliable until I put it on a UPS, as it seemed more sensitive to "power glitches" than my other systems. I recommend the Raspberry Pi4 power supply for it.

The problem is that Google put a layer of "security" on it that makes it more difficult to set up. I had to start with a serial terminal and manually enter the ssh key.

The basic instructions to set it up are here: https://coral.ai/docs/dev-board/get-started/

Once you are logging in via ssh, further installation is pretty much like any other command-line Debian system, although the "Mendel" Linux it uses has some quirks and is not well documented as far as I can tell. Not a good choice for Linux beginners.

One quirk is that the Mosquitto broker doesn't start automatically after rebooting, so I had to start it with a root cron job: @reboot /usr/sbin/mosquitto >/dev/null 2>&1 &
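
For reference, a sketch of how that entry goes into root's crontab (`sudo crontab -e` opens it for editing):

```
# root crontab entry: start the Mosquitto broker at boot
@reboot /usr/sbin/mosquitto >/dev/null 2>&1 &
```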

A larger issue is that the current Mendel system probably uses Google's "new" Python API, which breaks the original. The two are apparently incompatible at the lowest-level hardware library, which uses the same name, so I couldn't get the two versions to co-exist. Modifying my code for the new API is on the ToDo list, but since they can't co-exist I haven't yet set up a system where I can do so.

But my code runs if you apt-get install the "legacy version":

sudo apt-get install libedgetpu1-std
sudo apt-get install python3-pycoral
sudo apt-get install libedgetpu1-legacy-std
sudo apt-get install python3-edgetpu
sudo apt-get install libedgetpu1-legacy-max

The above is the installation sequence that worked for me, even though it installs and then uninstalls some things on a new Ubuntu 20.04 system.

Hope this helps. Here are some of my Coral Dev board results (7DEC2019wbk, Coral Development Board; the HD & UHD rtsp streams are 3 fps each):

4 HD (1080p): ~11.9 fps (basically processing every frame)
2 UHD (2160p) + 2 HD: ~11.7 fps
2 UHD + 3 HD: ~14.6 fps
2 UHD + 4 HD: ~12.3 fps; with -d 0 (no display) ~16.7 fps
3 UHD: ~8.8 fps (basically processing every frame)
4 UHD: ~0.1 fps on a short run; the system locks up eventually!
3 UHD + 2 HD: ~0.27 fps; hopelessly overloaded, extremely sluggish
6 HD: ~17.9 fps
8 HD: ~16.8 fps; with -d 0 (no display) ~20.5 fps

I may have had to compile OpenCV for the TPU Dev board; as I said, it's not a good board for Linux beginners. I'll see if I can find my notes about how I did it.

alexsahka commented 3 years ago

Thank you, very much! This gives me hope!

alexsahka commented 3 years ago

Do I need to complete these steps? [screenshot]

alexsahka commented 3 years ago

Do you think this is a good guide on how to install OpenCV 4.0 on the Google Coral Dev board, or is it too old? https://medium.com/@balaji_85683/installing-opencv-4-0-on-google-coral-dev-board-5c3a69d7f52f

wb666greene commented 3 years ago

> Do I need to complete these steps? [screenshot]

No, they should already be in the Mendel system that you install to set up/update the Dev board.

As I said initially, you will probably have the new "python3-pycoral", which is incompatible with the python3-edgetpu package.

Assuming Google has the libedgetpu1-legacy-std packages in the Mendel repo, you should be able to remove the new libedgetpu1 and install the legacy version my code requires with:

sudo apt-get install libedgetpu1-legacy-std
sudo apt-get install python3-edgetpu
sudo apt-get install libedgetpu1-legacy-max

Modifying my code to use the new pycoral API is on the ToDo list but has no timetable, as I see only pain and no gain in doing so at the moment. If they'd made it so the two versions could co-exist, I think I'd have done it by now; it doesn't look to be too difficult, but breaking one of my existing systems to work on it is a hindrance.

wb666greene commented 3 years ago

> Do you think this is a good guide on how to install OpenCV 4.0 on the Google Coral Dev board, or is it too old? https://medium.com/@balaji_85683/installing-opencv-4-0-on-google-coral-dev-board-5c3a69d7f52f

Yes, I believe these are the same instructions that I followed: needing the SD card, activating swap, and compiling OpenCV 4.0.0.

I think I initially tried what was then the "newest" OpenCV version, 4.1.?, but the build failed, so I repeated it with 4.0.0 as used in these instructions and it has worked very well.

If the build can read h.264/h.265 rtsp streams correctly, my TPU code doesn't need anything that is not in opencv-3.3.0. Some of the opencv-openvino versions have had issues decoding h.265 rtsp streams as noted in my instructions.
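
If it helps, here is a minimal sketch for checking that an OpenCV build can decode a camera's rtsp stream; the URL, credentials, and path are hypothetical:

```python
# Quick decode test for an rtsp stream with this OpenCV build
import cv2

cap = cv2.VideoCapture("rtsp://user:password@192.168.1.20:554/cam/realmonitor?channel=1&subtype=0")
ok, frame = cap.read()
print("decoded a frame:", ok, frame.shape if ok else "")
cap.release()
```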

My current TPU code has been running since 24SEP2020 on my Coral Dev Board. The cameras have dropped out a few times (power glitches?) during the interval, according to the log file, but the code has recovered each time, and it sent me an alert this morning when the mail arrived. I haven't touched the system since it was last restarted, when I had to replace my router after it died suddenly and everything quit working.

alexsahka commented 3 years ago

Thank you for your quick response. I installed all the requirements. A question about NodeRed and MQTT: is it sufficient for your code if I have NodeRed and an MQTT server running on another host on my LAN? Or do I have to install NodeRed and MQTT on the Coral Dev board itself to satisfy your code?

wb666greene commented 3 years ago

It should work with node-red and the MQTT broker on different hosts, although it's not been exhaustively tested.

The --mqttBroker command line option should let you specify the name or IP of your MQTT broker host.
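
For example, a hypothetical invocation pointing the code at a broker on another LAN host (the IP is made up; this assumes TPU.py accepts the same -mqtt/--mqttBroker option quoted later in this thread from AI_dev.py):

```
python3 TPU.py -rtsp cameras.rtsp -mqtt 192.168.1.50
```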

The node-red flow uses messages from the MQTT broker, so you'll have to reconfigure the MQTT nodes if you're not running node-red on the local host. The real issue will be the start, restart (watchdog), and stop scripts; you'll have to re-write them to launch the commands via remote ssh.

There are potential advantages to having node-red store the detections on a different machine, but I've not pursued it since Node-red and MQTT are so easy on resources.

HTH.

alexsahka commented 3 years ago

# specify MQTT broker
ap.add_argument("-mqtt", "--mqttBroker", default="localhost", help="name or IP of MQTT Broker")

Is this line (246 of AI_dev.py) the one for the MQTT Broker? Not a secure connection? (no username or password)

wb666greene commented 3 years ago

Correct. My system is designed to run on a private network where lack of physical access is the security. I push notifications out via the Email-to-MMS gateway of my phone provider. There is no access to my IoT devices unless you can physically plug a cable into the LAN side of the router/firewall.

You'll have to consult the Mosquitto and paho-mqtt python docs for adding password protection.
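
As a starting point, here is a minimal sketch (not from this repo; the broker address, credentials, and payload are hypothetical) of username/password auth with the paho-mqtt client. On the broker side, the usual pairing is Mosquitto's password_file option (entries created with mosquitto_passwd) plus allow_anonymous false:

```python
# Minimal paho-mqtt client using username/password authentication
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("mqtt_user", "mqtt_password")  # must match an entry in Mosquitto's password_file
client.connect("192.168.1.50", 1883, 60)              # broker host, port, keepalive
client.publish("Alarm/ViewCamera", "1")               # topic seen in the logs in this thread
client.disconnect()
```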

alexsahka commented 3 years ago

Thank you! Probably my last questions... I installed everything, copied your code to /home/mendel/AI, and edited the MQTT server address. Questions:

  1. Do I need to keep all files in the AI directory from your Github folder for the Coral Dev board?
  2. Where do I need to add the RTSP address of my cameras?
  3. How to start your code (command)?
wb666greene commented 3 years ago

1) No, you only need TPU.py and the mobilenet_ssd_v2 folder. The other stuff is for NCS/NCS2/CPU support. AI_dev.py is my test code that lets TPU, NCS/NCS2, and CPU all run together on the same set of cameras for testing and comparison.

2) You need to create a file with the rtsp stream URLs and pass its name in with the -rtsp command line option. Look at samples/example_cameras.rtsp for an example (see the sketch after this list). It's best to verify each URL by pasting it into VLC's "Open Network Stream". You need to use the embedded username/password format, which is camera specific.

3) If you look at the PiTPU_startAI.sh sample script you can see an example of the command line to start it using a node-red exec node, with node-red running on the Dev Board; your remote SSH command will be similar. Basically: python3 ./TPU.py -d 0 -rtsp yourCameraURLfile 2>/dev/null &

You may need to make it a script, as I did, to set the environment, paths, etc.
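
A hypothetical sketch of such a camera URL file, one rtsp URL per line with embedded username/password (the credentials and addresses are made up, and the path shown is typical of Dahua cameras; yours may differ):

```
rtsp://user:password@192.168.1.20:554/cam/realmonitor?channel=1&subtype=0
rtsp://user:password@192.168.1.21:554/cam/realmonitor?channel=1&subtype=0
```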

alexsahka commented 3 years ago

Thank you for your response.

I modified the NodeRed code; I can see all my cameras in the NodeRed GUI, and I can start and stop the code from NodeRed.

But I'm getting errors like this:

(Cam0:4775): Gdk-CRITICAL : 18:47:50.873: gdk_monitor_get_scale_factor: assertion 'GDK_IS_MONITOR (monitor)' failed
(Cam0:4775): Gdk-CRITICAL : 18:48:14.611: gdk_monitor_get_workarea: assertion 'GDK_IS_MONITOR (monitor)' failed
Alarm/ViewCamera: 1 ... 2021-01-28 18:54:37

I feel like I need to make more modifications to the TPU.py file? I just added my local unsecured MQTT server.

Also, the code is not saving any screenshots. Do I need to create a special folder for screenshots? One more error:

-bash: Alarm/ViewCamera:: No such file or directory

Do I need the AI_OVmt.py file? It is part of the PiTPU_startAI.sh sample script. How about onvif.txt or snapshots.txt?

One more error: /home/mendel/AI/PiTPU_startAI.sh: line 20: /opt/intel/openvino/bin/setupvars.sh: No such file or directory

alexsahka commented 3 years ago

Never mind, I found out how to fix all of these issues. The code is working now; I need to test reliability.

Thanks.

alexsahka commented 3 years ago

Streaming 6 RTSP streams from Dahua cameras: H.264, 704x480, 5 FPS. The code is not reliably recognizing cars or persons so far... How do I adjust the code for better accuracy?

alexsahka commented 3 years ago

A lot of dropped frames on exit, is this normal?

RTSP stream sampling thread2 is exiting, dropped frames 35407 times.
Coral TPU thread waited: 76 dropped: 0 out of 42345 images.
AI: 29.93 inferences/sec
[INFO] Program Exit signal received: 2021-01-28 19:52:00
AI processing approx. FPS: 29.93
[INFO] Run elapsed time: 1414.85 seconds.
[INFO] Frames processed by AI system: 42345
[INFO] Main loop waited for results: 359 times.
RTSP stream sampling thread5 is exiting, dropped frames 35607 times.
RTSP stream sampling thread3 is exiting, dropped frames 59 times.
RTSP stream sampling thread4 is exiting, dropped frames 27 times.
RTSP stream sampling thread0 is exiting, dropped frames 112 times.
RTSP stream sampling thread1 is exiting, dropped frames 102 times.

wb666greene commented 3 years ago

> Never mind, I found out how to fix all of these issues. The code is working now; I need to test reliability.
>
> Thanks.

It would be nice if you could follow up with what fixed your issues.

You do not need the AI_OVmt.py file, as it is basically TPU.py but for the NCS/NCS2. The sample node-red code tries to support both options. The setupvars.sh line is needed for NCS/NCS2 OpenVINO support; it should have been commented out in the sample TPU startup script.

I started with the Raspberry Pi and the original NCS stick, but the TPU is much better, especially when used with something stronger than the Raspberry Pi4.

I think you will be happy with the reliability. I can unplug a camera and nothing bad happens to any of the other cameras. When I plug the camera back in, it simply starts working again (unless your router has changed its IP address).

I specifically ignore everything but detecting people. You need to modify the TPU thread in the for r in detection: loop and add some OR conditions to the if r.label_id == 0: clause, using the index numbers for the objects from the labels.txt file.

I modified a version to use for license plate detection. The COCO label file indices for vehicles are vehicle = (2,3,5,7); I then changed the test to if r.label_id in vehicle: inside the for loop. A sketch of the change is below.
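
This is a hedged sketch, not the actual TPU.py code; it only illustrates widening the person-only test described above to also accept vehicles (the wanted() helper is hypothetical):

```python
# Label indices follow the 0-indexed COCO labels.txt quoted above
PERSON = 0                # "person"
vehicle = (2, 3, 5, 7)    # car, motorcycle, bus, truck

def wanted(label_id):
    """True for detections worth alerting on: person OR vehicle."""
    return label_id == PERSON or label_id in vehicle

# inside the TPU thread the loop would then look something like:
# for r in detection:
#     if wanted(r.label_id):
#         ...  # existing alert/save logic
```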

I'm biased strongly against false alarms; you can lower the --confidence from the default of 0.60 and the --verifyConfidence from the default of 0.70 on the command line.
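
For example, a hypothetical invocation with relaxed thresholds (the camera file name is made up):

```
python3 TPU.py -rtsp cameras.rtsp --confidence 0.50 --verifyConfidence 0.60
```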

Somewhat paradoxically, I seem to get better detection with higher camera resolutions. Try changing a camera to 720p or 1080p. Camera resolution is handled automatically; I've mixed 4K, 1080p, and 720p cameras. A single 4K camera is about all the Pi4 can handle, but I've gotten ~9 fps from three 4K (8 Mpixel) cameras with my Coral Dev Board.

The testDetection.jpg shows it is picking up people walking on the other side of the street with a 1080p camera, which I have to ignore as they are not on my property and thus not worthy of a notification. I have noticed that very high camera angles (close to looking straight down) reduce detection sensitivity; my speculation is that it's the perspective distortion from the "wide angle" lenses generally used in security cameras.

As to dropped frames: depending on your cameras' rtsp stream frame rates, lots of dropped frames are normal in the camera threads. Since you are getting ~30 fps processed by the TPU, I'd change the frame rate on each camera to 5 or 6 fps; it saves network bandwidth if nothing else. Although seeing some cameras drop 40K frames while others drop only 30 seems strange. Do some of your cameras have a really slow rtsp stream startup? Camera threads that are running will accumulate dropped frames while other camera threads are starting.

alexsahka commented 3 years ago

I've changed the start script as:

#!/bin/bash
# edit for directory of AI code and model directories
cd /home/mendel/AI

export DISPLAY=:0
export XAUTHORITY=/home/mendel/.Xauthority

# should be clean shutdown
/usr/bin/pkill -2 -f "TPU.py" > /dev/null 2>&1
/usr/bin/pkill -2 -f "AI_OVmt.py" > /dev/null 2>&1
sleep 5

# but, make sure it goes away before retrying
/usr/bin/pkill -9 -f "TPU.py" > /dev/null 2>&1
/usr/bin/pkill -9 -f "AI_OVmt.py" > /dev/null 2>&1
sleep 1

export PYTHONUNBUFFERED=1

# necessary only if using OpenVINO cv2
#source /opt/intel/openvino/bin/setupvars.sh

#./TPU.py -cam snapshots.txt -d 1
#./TPU.py -d 0 -cam onvif.txt >> ../detect/`/bin/date +%F`_AI.log 2>&1 &
python3 ./TPU.py -d 0 -rtsp MYcameraURL.rtsp 2>/dev/null &

alexsahka commented 3 years ago

All of my cameras are Dahua with IVS (intelligent video surveillance). Basically, if the camera sees a moving object it raises an IVS trigger; I capture this signal with NodeRed and trigger recording on my Synology Surveillance System. At the same time, for control and history, I record this signal in an Influx database so I can review the history with Grafana.

Last night I got around 14 IVS triggers from 6 cameras, and I reviewed them: only 1 false trigger (a reflection of moving car headlights on my parked car's roof); the rest were cats and rabbits in my backyard, plus humans in the front twice. There were 0 triggers from the AI system. The question: what needs to be adjusted in the code to get at least the same, or controllable, behavior from the AI system? Before last night I had adjusted --confidence to 0.50 and --verifyConfidence to 0.60.

Attached is a Grafana screenshot from one of the backyard cameras. Twice I manually triggered the AI input, just to test that the database is recording (the two green lines). [screenshot]

wb666greene commented 3 years ago

I'm not at all familiar with that particular camera, and Grafana means nothing to me.

Here is a sample from my Coral Dev Board. Clearly it works very well.

[sample image: DevBoardSample]

Maybe post a sample image with a person you thought it should have detected but didn't.

For me, detecting and alerting on anything that is not a person is a failure; YMMV.