wb666greene / AI-Person-Detector

Python Ai "person detector" using Coral TPU or Movidius NCS/NCS2

question: node-red gui - how do you refresh detection image? #4

Open ozett opened 4 years ago

ozett commented 4 years ago

Sorry for filling your issue space with nothing but questions, but your code architecture looks great, and questions arise as I study it.

You send the filename and the image via MQTT to Node-RED (your code as below), I guess.

```python
# send image for live display in dashboard
if ((CameraToView == cami) and (UImode == 1 or (UImode == 2 and personDetected))) or (UImode == 3 and personDetected):
    if personDetected:
        topic = str("ImageBuffer/!" + filename + "_" + "Cam" + str(cami) + "_AI.jpg")
    else:
        retv, img_as_jpg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), 40])
        if retv:
            topic = str("ImageBuffer/!" + filename + "_" + "Cam" + str(cami) + ".jpg")
        else:
            print("[INFO] conversion of numpy array to jpg in buffer failed!")
            continue
    client.publish(str(topic), bytearray(img_as_jpg), 0, False)
```

What is refreshing the Node-RED image viewer?

Is it the incoming payload and the settings on the Node-RED viewer GUI element?

I want to implement something simpler, so may I ask about your experience: if the image is already stored locally on the filesystem of the Node-RED system, would it be sufficient for a refresh to send an MQTT payload to your image GUI flow?
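A minimal sketch of that simpler approach, as a thought experiment rather than anything from this repo: since the image already sits on the Node-RED host, the Python side only needs to publish the file path, and a Node-RED "file in" node in the flow can read the bytes for the image widget. The `ImagePath/CamN` topic and the filename convention below are illustrative assumptions, not the repo's actual topics.

```python
# Hedged sketch, not the repo's code: instead of publishing JPEG bytes,
# publish only the path of an image already saved on the Node-RED host.
# A Node-RED "file in" node can then read the bytes for the image widget.
# Topic name and filename convention are illustrative assumptions.

def make_refresh_message(filename: str, cami: int, personDetected: bool):
    """Build (topic, payload) telling the viewer flow which file to load."""
    suffix = "_AI.jpg" if personDetected else ".jpg"
    topic = "ImagePath/Cam" + str(cami)
    payload = filename + "_Cam" + str(cami) + suffix  # path on the Node-RED host
    return topic, payload

# The publish itself would look just like the original snippet:
#   client.publish(topic, payload, 0, False)
```

The payload is tiny compared to a JPEG buffer, which is the whole point of the question above.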

Thanks, great work. (For testing I am now running TensorFlow on CPU inside a VMware machine, grabbing one RTSP stream from a Hikvision surveillance cam. It measures around 10 FPS; much room to improve.)

ozett commented 4 years ago

i found this:

https://flows.nodered.org/flow/2b6c2f0d7a316f1a6831782d33a0d40c

Looks like the way you do it.

wb666greene commented 4 years ago

No problem with asking the questions. Yes, those last three nodes are the basic Node-RED flow to convert an image buffer into a display on the "dashboard" UI page.

There is a Pi_AI_Controller-Viewer.json sample file for the basic Node-RED interface. You can paste (import) it into a Node-RED flow tab and inspect or play with it. Questions about Node-RED are best asked on the Node-RED forum, in the thread linked in the Readme file.

The idea is to minimize any customization needed in the Python code beyond setting some command-line options for different situations in the start-up script. The local issues like camera names, region-of-interest filtering, etc. are moved to Node-RED. I'm not totally happy with the region-of-interest filtering being done in a JavaScript function, but you really only have to edit/add the camera-specific "box" or "polygon" points for each camera view.
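The box/polygon idea above is the classic point-in-polygon test on each detection. A rough Python equivalent of what the Node-RED JavaScript function does (a sketch with made-up polygon coordinates and camera IDs, not the repo's actual function):

```python
# Hedged sketch of region-of-interest filtering: accept a detection only if
# the centroid of its bounding box falls inside the camera's ROI polygon.
# The polygon points and camera IDs below are illustrative, not real configs.

def point_in_polygon(x, y, poly):
    """Ray-casting test: True if (x, y) lies inside the polygon."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Example per-camera ROI "box" expressed as a polygon (hypothetical values).
CAM_ROI = {0: [(0, 100), (640, 100), (640, 480), (0, 480)]}

def detection_in_roi(cam, box):
    """box = (x1, y1, x2, y2); accept if its centroid is inside the ROI."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return point_in_polygon(cx, cy, CAM_ROI[cam])
```

Keeping this per-camera data in Node-RED, as the comment describes, means the Python code never needs editing when a camera view changes.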

That somewhat complicated if statement is there to reduce the load on IoT-class machines like the Pi4. It accepts settings from the Node-RED dashboard to reduce the data flow between Python and the Node-RED viewer. If your machine has a display and is fast enough, use the -d 1 option in the start-up script if you want "live" images.

ozett commented 4 years ago

I transported some snippets from your code into my simple project. I now have a "live" view in Node-RED like you did, pushing the image from the AI machine via MQTT to the Node-RED flow.

I now see the advantage of doing it via MQTT. Currently I do detection with TensorFlow on a VMware virtual machine and get 10 FPS on CPU; not bad, I guess.

But this way I can now try to do the RTSP depay and inference on one of my Jetson Nanos and see how the FPS goes. I still have to check how to adapt the script for TensorRT, but it seems not too complicated. Via MQTT I can feed the live view from anywhere. Great.

Next I will study your code further; great bits and pieces of architecture so far (for me). I am very happy and pleased to now have a live image in a browser window (with Node-RED, where I already have all my other Hikvision camera flows...).

Thanks a lot. I will report more successes as they happen... 😄

wb666greene commented 4 years ago

Thanks for the feedback. You using bits of my code for your project is the beauty of open source.

I've only applied existing AI object-detection models -- my Movidius, CPU, and TPU inference threads are straight from the example code. My main contribution is the use of queues to tie together multiple camera types and AI sub-systems to feed the main thread.
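The queue idea described above can be sketched with Python's standard library: camera threads push frames into a bounded queue, an inference thread consumes them, and the main thread reads results. This is an illustrative skeleton with a stand-in detector, not the repo's actual TPU/NCS threads.

```python
# Sketch of the queue architecture: producers (camera threads) feed a bounded
# frame queue; an AI thread consumes frames and emits results for the main
# thread.  The detector callable is a stand-in for real inference code.
import queue
import threading

frame_q = queue.Queue(maxsize=10)    # camera threads -> AI thread
result_q = queue.Queue(maxsize=10)   # AI thread -> main thread

def camera_thread(cam, frames):
    """Producer: real code would read an RTSP stream instead of a list."""
    for frame in frames:
        try:
            frame_q.put((cam, frame), timeout=1.0)
        except queue.Full:
            pass                     # drop frames rather than fall behind

def ai_thread(detect):
    """Consumer: run inference on each frame; a None frame is a shutdown sentinel."""
    while True:
        cam, frame = frame_q.get()
        if frame is None:
            break
        result_q.put((cam, frame, detect(frame)))
```

The bounded queues give natural back-pressure: slow inference drops camera frames instead of letting memory grow, which matters on IoT-class hardware.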

May I ask what AI model you are planning to use? If it's not MobileNet-SSD v1 or v2, please contribute it back so I can perhaps add it here as another AI thread option.

ozett commented 4 years ago

I cannot contribute anything new, as I had good results with MobileNet-SSD v2.


It feels quite fast, and I don't mind wrong categories as long as an object is detected at all. I trigger the AI after the cameras report motion detection via their alert stream.
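Gating inference on camera motion alerts, as described above, can be sketched as a small "hold-open" timer: run the detector only for a few seconds after the last alert from a camera. This is an illustrative sketch; the Hikvision alert-stream parsing and the hold time are assumptions, not anything from this thread's code.

```python
# Hedged sketch of alert-gated inference: infer only while a camera's motion
# alert is "fresh".  Alert delivery (e.g. the Hikvision alert stream) is not
# shown; the 5-second hold window is an arbitrary illustrative choice.
import time

class AlertGate:
    def __init__(self, hold_seconds=5.0):
        self.hold = hold_seconds
        self.last_alert = {}          # cam id -> timestamp of last motion alert

    def on_alert(self, cam, now=None):
        """Call when a camera reports motion."""
        self.last_alert[cam] = time.monotonic() if now is None else now

    def should_infer(self, cam, now=None):
        """True while the camera's last alert is within the hold window."""
        now = time.monotonic() if now is None else now
        return now - self.last_alert.get(cam, float("-inf")) <= self.hold
```

The main loop would then skip frames from cameras where `should_infer` is False, saving CPU between motion events.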

But after the last experiments I'm considering processing all frames. Really next up, though, is getting the Nano to do the RTSP depay and AI and push images to Node-RED on the VMware machine without changing much of the existing code.

As soon as I see a valuable contribution from my little project I will report my findings, but realistically it could take some time...

ozett commented 4 years ago

Maybe worth checking out: https://github.com/tensorflow/models/tree/master/research/object_detection#may-19th-2020

> MobileDets outperform MobileNetV3+SSDLite by 1.7 mAP at comparable mobile CPU inference latencies. MobileDets also outperform MobileNetV2+SSDLite by 1.9 mAP on mobile CPUs, 3.7 mAP on EdgeTPUs and 3.4 mAP on DSPs while running equally fast. MobileDets also offer up to 2x speedup over MnasFPN on EdgeTPUs and DSPs.

wb666greene commented 4 years ago

I will look into it. MobileNet-SSD v2 was a big improvement. This is the first I've heard of MobileNetV3; or was that a typo?

ozett commented 4 years ago

https://github.com/tensorflow/models/tree/master/research/object_detection#oct-15th-2019

ozett commented 4 years ago

There is something like V3 for Edge TPUs... but MobileDets (no typo) are another improvement, they say on that page...