wb666greene / AI-Person-Detector

Python AI "person detector" using Coral TPU or Movidius NCS/NCS2

Reason for MQTT architecture for RTSP streams #1

Open · ozett opened this issue 4 years ago

ozett commented 4 years ago

Hi, I like your code and project very much. A first question as I look over your GitHub: why do you push RTSP images through MQTT instead of having the AI engine (or some thread) grab the images directly?

Cheers, toz 👍

wb666greene commented 4 years ago

I use MQTT for several reasons:

1) As IPC, to control the Python code from other "home automation" interfaces.
2) As the output path for detection images, which node-red processes for region-of-interest and other "filtering" before sending Email/MMS and audio alerts.
3) As an alternative way to push images into the AI for processing.
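
To give a rough picture of reason 1, here is a minimal sketch of an MQTT control channel; the topic name and payloads are hypothetical (not the topics the actual code uses), and it assumes the paho-mqtt 1.x callback API:

```python
# Minimal sketch of MQTT as IPC: a control topic toggles detection on/off.
# Topic name and payloads are hypothetical, not this repo's actual ones.
import paho.mqtt.client as mqtt

detection_enabled = True

def on_message(client, userdata, msg):
    global detection_enabled
    if msg.topic == "AI/control":                 # hypothetical control topic
        detection_enabled = (msg.payload == b"enable")
        print("Detection enabled:", detection_enabled)

client = mqtt.Client()                            # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883, 60)             # broker host, port, keepalive
client.subscribe("AI/control")
client.loop_forever()                             # block and dispatch callbacks
```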

rtsp2mqtt.py was an attempt to put all the RTSP decoding onto a single i5 or i7 "NUC" and distribute the AI processing over multiple Raspberry Pi computers. It worked great feeding an i5 or i7 laptop, but the Pi and other IoT-class small computers ended up having issues with the network latency, so this was not a solution for their weak RTSP decoding capability.
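
The basic idea looks roughly like this (a minimal sketch, not the actual rtsp2mqtt.py code; the camera URL, broker address, and topic name are placeholders):

```python
# Sketch of the rtsp2mqtt idea: one strong box decodes the RTSP streams and
# republishes JPEG frames on per-camera MQTT topics for other hosts to consume.
import cv2
import paho.mqtt.client as mqtt

RTSP_URL = "rtsp://user:pass@192.168.1.10:554/stream1"  # placeholder camera
TOPIC = "cameras/cam0/image"                            # hypothetical topic

client = mqtt.Client()
client.connect("192.168.1.2", 1883, 60)    # broker reachable by the AI hosts
client.loop_start()                        # network loop in a background thread

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode(".jpg", frame)  # compress before publishing
    if ok:
        client.publish(TOPIC, jpg.tobytes(), qos=0)  # qos=0: late frames just drop
```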

The Jetson Nano is the best ~$100 machine I've been able to find for decoding RTSP streams.

In "normal" use each camera gets an rtsp or onvif thread that grabs the images and pushes them to the AI thread via per camera queues.

ozett commented 4 years ago

Thanks for the reply. I am trying to understand your design to improve my little system, and I see your benchmarks as really good proof of it all. Great!

My understanding of MQTT was that it is good and fast for small payloads; I use it for signaling in my setup. But is it also good for pushing decoded images from RTSP streams?

(I must have another look at your code to see if you really push all decoded frames through MQTT?!)

Do you decode all the RTSP on the Nano and send the decoded "stream" or single images (?) for inference via MQTT to the Raspberries? Or is inference also done on the Nano after decoding RTSP, and you then send only the detected images through MQTT to the small devices?

I would be glad to understand your design a little better. It looks really promising. Thanks for all the hints. 🥀

wb666greene commented 4 years ago

I don't generally push all frames via MQTT. rtsp2mqtt.py was an attempt to do so, but Pi-class machines can't handle the network load. Another part of the motivation for rtsp2mqtt was that connecting to an RTSP stream can take 6-20 seconds, which is a real time-waster during development -- it worked well when my desktop was receiving the frames via MQTT, but failed badly on Pi-class systems.

I only push AI detection frames to node-red as MQTT buffers. In theory this lets me distribute the processing, but in practice just using the localhost connection is best for Pi-class computers. It still gets me some "true parallelism", since node-red runs as a separate process that handles the region-of-interest filtering, data saving, and notification sending.
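
A minimal sketch of that detection-output path, with hypothetical topic names and payload layout rather than the repo's actual node-red wiring:

```python
# Only frames with a detection get published, to a broker on localhost, where
# node-red subscribes and does the region-of-interest filtering and alerts.
import json
import time
import cv2
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883, 60)  # same host as node-red
client.loop_start()

def publish_detection(camera_name, frame, boxes):
    """Send the JPEG image plus a small JSON event for node-red to filter on."""
    ok, jpg = cv2.imencode(".jpg", frame)
    if not ok:
        return
    client.publish(f"AI/{camera_name}/image", jpg.tobytes())
    event = {"camera": camera_name, "boxes": boxes, "time": time.time()}
    client.publish(f"AI/{camera_name}/detection", json.dumps(event))
```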

ozett commented 4 years ago

Thanks for the reply, I will study your rtsp2mqtt.py code more deeply... I did experiments with a Jetson Nano (got 2 spare ones lying around) to grab RTSP streams with GStreamer and do inference on the Nano. That worked well for me.
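
Roughly this kind of pipeline (a sketch, not my exact code; it assumes an H.264 stream, JetPack's nvv4l2decoder element, and an OpenCV build with GStreamer support, and the URL is a placeholder):

```python
# Hardware-accelerated RTSP decode on a Jetson Nano via a GStreamer pipeline,
# read through OpenCV.
import cv2

RTSP_URL = "rtsp://user:pass@192.168.1.10:554/stream1"  # placeholder
pipeline = (
    f"rtspsrc location={RTSP_URL} latency=200 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! "   # decode on the Nano hardware
    "nvvidconv ! video/x-raw,format=BGRx ! "        # NVMM -> CPU memory
    "videoconvert ! video/x-raw,format=BGR ! "      # BGR for OpenCV
    "appsink drop=true max-buffers=1"               # keep only the newest frame
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # run inference on `frame` here
```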

I wanted to get higher frame rates for grabbing and inference, so I am looking for a suitable architecture to achieve this.

I am planning to deploy the DeepStream SDK on the Nano and see if that helps here. I am also planning to install the DeepStream SDK on a virtual Ubuntu with a GeForce 1660 and see how fast that combination can grab RTSP frames and do inference with certain AI models. That will take a while, but the Xavier NX demos on YouTube show interesting performance. That may also be a way to go.

But the first small step now is to look into your code and see what suits my little project and improves the FPS for inference.

If you don't mind, I may come back here later if an important question arises again.

Thanks for everything, toz