niciBume / Cat_Prey_Analyzer

Cat Prey Image-Classification with deep learning
MIT License

using CatPreyAnalyzer on bookworm #26


netphantm commented 5 months ago

I'm using an 'RPi 4 Model B r1.5' and a Pi camera for testing, which I will swap for the IR model later on (APKLVSR https://tinyurl.com/yslgfhlj). The problem is that picamera doesn't work on the new bookworm OS (kernel 6.1.0-rpi8-rpi-v8):

Executing CatPreyAnalyzer
CatCamPy: /home/pi
Traceback (most recent call last):
  File "/home/pi/CatPreyAnalyzer/cascade.py", line 16, in <module>
    from CatPreyAnalyzer.camera_class import Camera
  File "/home/pi/CatPreyAnalyzer/camera_class.py", line 2, in <module>
    from picamera.array import PiRGBArray
  File "/home/pi/.local/lib/python3.11/site-packages/picamera/__init__.py", line 72, in <module>
    from picamera.exc import (
  File "/home/pi/.local/lib/python3.11/site-packages/picamera/exc.py", line 41, in <module>
    import picamera.mmal as mmal
  File "/home/pi/.local/lib/python3.11/site-packages/picamera/mmal.py", line 49, in <module>
    _lib = ct.CDLL('libmmal.so')
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libmmal.so: cannot open shared object file: No such file or directory
root@farnsworth:~$ vcgencmd get_camera
supported=0 detected=0, libcamera interfaces=0

After a lot of searching, I managed to get TensorFlow to work with motionEye and a libcamerify wrapper script by using TFLite_detection_stream.py, which gave me about 7 FPS. That's not bad; I suppose it would be enough. I haven't tried using TFLite_detection_webcam.py directly on /dev/video0 yet. Here's the wrapper script I used:

cat /etc/motioneye/motion_wrapper.sh 
#!/bin/bash
$(command -v libcamerify) $(command -v motion) "$@"

How would I go about using CatPreyAnalyzer on this OS? Ideally it would use the camera directly rather than going through the motionEye stream, since that would add unnecessary overhead to the system. Let me know if you need more info.
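For reference, here is a rough sketch of what a bookworm-compatible replacement for the Camera class in camera_class.py might look like, using picamera2 (the supported camera stack on Raspberry Pi OS bookworm) instead of the legacy picamera/MMAL stack. The class shape, queue interface, and `grabber` parameter are assumptions for illustration, not the project's actual API:

```python
import queue
import threading

class Camera:
    """Grab RGB frames and push them onto a queue for the detector to consume."""

    def __init__(self, frame_queue, grabber=None, fps=5):
        self.frame_queue = frame_queue
        self.interval = 1.0 / fps
        self.stopped = threading.Event()
        if grabber is None:
            # On bookworm, picamera2 replaces the old picamera library,
            # whose MMAL backend (libmmal.so) no longer exists.
            from picamera2 import Picamera2
            cam = Picamera2()
            cam.configure(cam.create_preview_configuration(
                main={"format": "RGB888", "size": (640, 480)}))
            cam.start()
            grabber = cam.capture_array  # returns one RGB frame per call
        self.grabber = grabber

    def fill_queue(self):
        # Producer loop: grab a frame, drop the oldest one if the queue is
        # full, then sleep until the next capture is due.
        while not self.stopped.is_set():
            frame = self.grabber()
            if self.frame_queue.full():
                self.frame_queue.get_nowait()
            self.frame_queue.put(frame)
            self.stopped.wait(self.interval)

    def stop(self):
        self.stopped.set()
```

Any other frame source (e.g. a webcam read) could be passed in as `grabber`, which matches the idea of just supplying the classifier with a queue of images.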

netphantm commented 5 months ago

I'm not familiar with Python. Could this perhaps be made to use a webcam, like TFLite_detection_webcam.py does? This is what tensorflow1 shows:

pi@farnsworth:~/tensorflow1$ libcamerify python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model 
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[2:35:53.814977303] [22652] ERROR IPAModule ipa_module.cpp:172 Symbol ipaModuleInfo not found
[2:35:53.815042747] [22652] ERROR IPAModule ipa_module.cpp:292 v4l2-compat.so: IPA module has no valid info
[2:35:53.815125505] [22652]  INFO Camera camera_manager.cpp:284 libcamera v0.2.0+46-075b54d5
[2:35:53.845640625] [22677]  WARN RPiSdn sdn.cpp:39 Using legacy SDN tuning - please consider moving SDN inside rpi.denoise
[2:35:53.847852524] [22677]  INFO RPI vc4.cpp:447 Registered camera /base/soc/i2c0mux/i2c@1/ov5647@36 to Unicam device /dev/media2 and ISP device /dev/media0
[2:35:53.848701829] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 640x480-RGB888
[2:35:53.849055714] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected unicam format: 640x480-pGAA
[2:35:53.850122646] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 640x480-RGB888
[2:35:53.850391272] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected unicam format: 640x480-pGAA
[2:35:53.858056996] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 640x480-YUYV
[2:35:53.858666600] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected unicam format: 640x480-pGAA
[2:35:53.860154860] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 640x480-RGB888
[2:35:53.860803500] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected unicam format: 640x480-pGAA
[2:35:53.862041763] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 640x480-RGB888
[2:35:53.862618201] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 640x480-SGBRG10_1X10 - Selected unicam format: 640x480-pGAA
[2:35:53.867362423] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 1280x720-RGB888
[2:35:53.868047063] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 1920x1080-SGBRG10_1X10 - Selected unicam format: 1920x1080-pGAA
[2:35:53.869359880] [22652]  INFO Camera camera.cpp:1183 configuring streams: (0) 1280x720-RGB888
[2:35:53.870128853] [22677]  INFO RPI vc4.cpp:611 Sensor: /base/soc/i2c0mux/i2c@1/ov5647@36 - Selected sensor format: 1920x1080-SGBRG10_1X10 - Selected unicam format: 1920x1080-pGAA

Then it could also be used by calling it with libcamerify.

niciBume commented 5 months ago

Hi there,

I'm sorry, but I've got no clue about the OS you are referring to. In general, as long as you can supply a queue with images, this repo can be adapted to it.

But getting specific hardware to run is out of this project's scope, so I'm afraid I can't help you out on this one.

Good luck!

netphantm commented 5 months ago

Hi, the OS is Debian bookworm, the current Raspberry Pi OS, on a RPi 4B:

root@fry:~$ cat /etc/*release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

netphantm commented 4 months ago

The README says "The code is meant to run on a RPI4 with the [IR JoyIt Camera][...] attached." Will it at some point be tested/adapted to run on the latest Raspberry Pi OS?

netphantm commented 1 month ago

Unfortunately I'm not good enough with Python to rewrite it myself. Could it perhaps be made to run with a stream as input? (I have motionEye running on the same host, so I can also use the cam in Home Assistant.) This one works, even if it usually recognizes my cat as a 'person':

pi@farnsworth:~/tensorflow1$ python3 TFLite_detection_stream.py --streamurl=http://stream:somepass@localhost:9081/
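If a stream input were added, the core of it could be a small MJPEG frame extractor that pulls individual JPEGs out of the multipart stream motionEye serves (like the one on port 9081 above) and hands them to the classifier. A minimal sketch, assuming a byte stream such as `urllib.request.urlopen(...)`; it naively scans for JPEG start/end markers, which is a simplification since real JPEG payloads can in principle contain marker-like byte sequences:

```python
def iter_mjpeg_frames(stream, chunk_size=4096):
    """Yield raw JPEG byte blobs from a multipart MJPEG byte stream.

    `stream` is any object with a read(n) method returning bytes
    (e.g. an HTTP response from motionEye's stream port).
    """
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # stream closed
            break
        buf += chunk
        while True:
            start = buf.find(b"\xff\xd8")  # JPEG start-of-image marker
            end = buf.find(b"\xff\xd9")    # JPEG end-of-image marker
            if start == -1 or end == -1 or end < start:
                break  # no complete frame buffered yet
            yield buf[start:end + 2]
            buf = buf[end + 2:]  # keep the remainder for the next frame
```

Usage would look something like `for jpg in iter_mjpeg_frames(urllib.request.urlopen(url)): ...`, decoding each blob into an image and pushing it onto the classifier's input queue.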