wb666greene / AI-Person-Detector

Python AI "person detector" using Coral TPU or Movidius NCS/NCS2

Question - How do you get information from Node-RED into a running Python script? #12

Closed staebchen0 closed 1 year ago

staebchen0 commented 1 year ago

Hi, sorry for writing to you out of the blue, but I came across a request from you on GitHub (https://github.com/namgk/node-red-contrib-pythonshell/issues/12). I'm facing a similar problem; maybe you have a tip for me?

I monitor my cat flap with a USB cam to see, with the help of a TFLite model, whether the cat has brought something in or not. The program has been running without problems for a long time. However, now I only want to run the prediction when an external motion detector reports movement. That's why I integrated my motion detector via Node-RED.

Unfortunately, I don't know how to get the information from the motion detector into my running program. My attempt with Powershell in Node-RED has so far been unsuccessful. Node-RED and the Python program run on a Pi 4; I haven't started the program via Node-RED yet.

I haven't gotten any further with googling either. Do you maybe have a tip for me? How did you solve your "problem"?

Best regards

ozett commented 1 year ago

> I monitor my cat flap with a USB cam to see, with the help of a TFLite model, whether the cat has brought something in or not. The program has been running without problems for a long time.

Did your AI detect the cat carrying the mouse? I am very interested, because I am doing nearly the same thing without success.

> Unfortunately, I don't know how to get the information from the motion detector into my running program.

What is your "running" program? Python? Node-RED? Tasmota? I guess Node-RED and the Python script could be integrated within Node-RED. Maybe you could give some more detailed information about the integration on both sides?

Maybe an example of possible solution parts like MQTT or shell commands?
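
Regarding the MQTT idea, here is a minimal sketch of how it could look (assumptions: a local Mosquitto broker, the paho-mqtt 1.x package, and a made-up topic name; the actual model call is left as a comment). Node-RED would publish "1" from an mqtt out node whenever the motion detector fires, and the already running Python script subscribes in a background thread and only checks a flag inside its capture loop:

    # Sketch only: Node-RED publishes "1"/"0" on an MQTT topic, the running
    # Python script subscribes and sets a flag its capture loop can check.
    import threading
    import paho.mqtt.client as mqtt

    TOPIC = "catflap/motion"             # made-up topic name
    motion = threading.Event()

    def on_message(client, userdata, msg):
        if msg.payload.decode() == "1":
            motion.set()
        else:
            motion.clear()

    client = mqtt.Client()               # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect("localhost", 1883)    # assumes a local Mosquitto broker
    client.subscribe(TOPIC)
    client.loop_start()                  # MQTT traffic handled in a background thread

    # inside the existing cap.isOpened() loop:
    #     if motion.is_set():
    #         motion.clear()
    #         # run the TFLite prediction on the current frame here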


staebchen0 commented 1 year ago

Sorry in advance for my English!

Regarding your question, yes, the AI recognizes if the cat has an animal in its mouth and then doesn't let it in ;-)

I trained the model with Teachable Machine. It's very easy there :-)

My program is in Python and runs without Node-RED.

I just wanted to try to trigger the AI via an external motion detector, so the detection would not have to run all the time. But the program must already be running; I only have a few milliseconds for the check.

I've already tried it with Powershell, but nothing arrived in the Python program :-(

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Read the Bewegungsmelder (motion detector) status from the command line
        try:
            bewegungsmelelder = sys.argv[1]
            print("Status BW ", str(bewegungsmelelder))
        except IndexError:
            bewegungsmelelder = 0

Even if 1 is sent from Node-RED, it doesn't arrive in the try block.

With a test script, the exchange works. Python script:

    import sys

    print("This is the name of the script: ", sys.argv[0])

    if sys.argv[1] == "1":
        print('Bewegung erkannt ' + sys.argv[1])   # motion detected
    else:
        print('Keine Bewegung ' + sys.argv[1])     # no motion

Test flow in Node-RED:

    [
        { "id": "1a9123b635bc1812", "type": "tab", "label": "Raspberry", "disabled": false, "info": "", "env": [] },
        { "id": "2c4b9956.371766", "type": "inject", "z": "1a9123b635bc1812", "name": "", "props": [ { "p": "payload" }, { "p": "topic", "vt": "str" } ], "repeat": "", "crontab": "", "once": false, "onceDelay": "", "topic": "", "payload": "1", "payloadType": "str", "x": 150, "y": 200, "wires": [ [ "1919c9a5.8929a6" ] ] },
        { "id": "e70314d0.69e5e8", "type": "debug", "z": "1a9123b635bc1812", "name": "", "active": true, "tosidebar": true, "console": false, "tostatus": false, "complete": "payload", "targetType": "msg", "statusVal": "", "statusType": "auto", "x": 580, "y": 200, "wires": [] },
        { "id": "1919c9a5.8929a6", "type": "pythonshell in", "z": "1a9123b635bc1812", "name": "", "pyfile": "/home/pi/PycharmProjects/etgeTpuCoral/TestPowershell.py", "virtualenv": "", "continuous": false, "stdInData": false, "x": 370, "y": 200, "wires": [ [ "e70314d0.69e5e8" ] ] }
    ]
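
The reason the test script sees the value but the running loop never does: sys.argv is filled only once, when the Python process starts, so a value injected later from Node-RED can never show up inside an already running while loop. The test script works because the pythonshell node starts it fresh for every message. A long-running script needs a live channel instead, for example MQTT as sketched above, or stdin. Below is a minimal sketch, assuming the pythonshell node's stdInData option really does pass msg.payload to the script's standard input (an assumption, not verified here); the actual model call is left as a comment:

    # Sketch only: read trigger lines from stdin in a background thread
    # instead of sys.argv, which never changes after startup.
    import sys
    import threading
    import cv2

    motion = threading.Event()

    def stdin_listener():
        for line in sys.stdin:           # each line sent by Node-RED
            if line.strip() == "1":
                motion.set()

    threading.Thread(target=stdin_listener, daemon=True).start()

    cap = cv2.VideoCapture(0)            # assumed camera index
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if motion.is_set():
            motion.clear()
            # run the TFLite prediction on `frame` here (existing code)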

ozett commented 1 year ago

I have similar flows; one grabs an image and feeds it into my custom model over the command line. It looks like this:

[screenshot of the Node-RED flow]

Another one is similar, but it grabs 30 s of video from the cam and the script with the AI model analyses it frame by frame.

Your Python code looks like you want to start reading frames when the GPIO pin triggers, inside a Python loop. I am not really a coder, but I would try grabbing the GPIO status somehow, like here: https://stackoverflow.com/questions/49121040/trying-to-get-the-button-input-to-work-while-a-loop-happening
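
For illustration, a sketch of that Stack Overflow idea, assuming the detector were wired directly to a Pi GPIO pin (the pin number and HIGH-on-motion behaviour are assumptions; the model call is left as a comment):

    # Sketch only: poll a PIR sensor on a GPIO pin inside the existing capture loop.
    import cv2
    import RPi.GPIO as GPIO

    PIR_PIN = 17                         # example BCM pin number (assumption)
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIR_PIN, GPIO.IN)

    cap = cv2.VideoCapture(0)            # assumed camera index
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if GPIO.input(PIR_PIN):          # HIGH while the sensor reports motion
            pass                         # run the TFLite prediction on `frame` here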

Most of my AI now runs continuously, grabbing 5 fps from a lot of cams and doing the AI analysis within Frigate. Maybe that's another option for you? -> https://github.com/blakeblackshear/frigate

If you stay with your trained model and the approach of analysing images, I would rather loop on the GPIO status than save images and do the analysis outside.

[GPIO pinout overview] https://tutorials-raspberrypi.de/raspberry-pi-gpio-erklaerung-beginner-programmierung-lernen/

Here's the queue system and how wb666greene does it in his repository: https://github.com/wb666greene/AI-Person-Detector/blob/4efbdce2ff7141a881cfec0342d9ba124d33f28d/AI_dev.py#L582

That system can easily be stripped down to your needs; it should be fairly simple for an experienced Python coder. Without much coding skill, I would rather try triggering the script from GPIO in Node-RED. -> https://nodered.org/docs/faq/interacting-with-pi-gpio
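
A much simplified illustration of that producer/consumer idea (not the actual AI_dev.py code, just the general pattern): one thread keeps draining the camera into a small queue, and the main loop only pulls a frame and runs the model when a trigger arrives on a second queue, fed by MQTT, stdin or GPIO as discussed above:

    # Sketch only: capture thread + queues, loosely modelled on the linked approach.
    import queue
    import threading
    import cv2

    frames = queue.Queue(maxsize=2)      # holds only the most recent frames
    triggers = queue.Queue()             # fed by the motion detector channel

    def capture(src=0):                  # src: assumed camera index
        cap = cv2.VideoCapture(src)
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            if frames.full():            # drop the oldest frame to stay current
                try:
                    frames.get_nowait()
                except queue.Empty:
                    pass
            frames.put(frame)

    threading.Thread(target=capture, daemon=True).start()

    while True:
        triggers.get()                   # block until the motion detector fires
        frame = frames.get()             # newest available frame
        # run the TFLite prediction on `frame` here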

staebchen0 commented 1 year ago

Hi, thanks for your tips!

I don't use a motion detector connected to the Raspberry Pi and therefore can't query the GPIO. The motion detector is integrated via my smart home system.

I'll keep looking :-)