jeffbass / imagenode

Capture and Selectively Send Images and Sensor Data; detect Motion; detect Light
MIT License
38 stars 19 forks

Implementation #23

Open mohan51 opened 1 year ago

mohan51 commented 1 year ago

How did you implement imagenode on the Raspberry Pi, and what changes do we need to make? I am testing this project using a single Raspberry Pi as the imagenode and my PC (Linux OS) as the imagehub. What changes do I need to make to run this project successfully with my resources?

jeffbass commented 1 year ago

The way to get started is with one Raspberry Pi (RPi) with Pi Camera sending images to one hub computer (Mac or Linux PC). It is easiest to start with the 2 computers on the same network. You can easily move them to different networks later. You do not need any other devices than a RPi with a camera (running imagenode.py) and a Linux PC (running imagehub.py). You will need networking hardware to connect the 2 computers, but it can be ethernet or WiFi.

So, start with 2 computers:

  1. RPi computer running imagenode.py. Note that imageZMQ is NOT yet tested with Raspberry Pi OS Bullseye. I am waiting for a production replacement for the Python PiCamera module. imageZMQ runs with Raspberry Pi OS Buster and earlier.
  2. Linux PC running imagehub.py.

imageZMQ is a Python module that is imported by imagenode.py on the RPi and imported by imagehub.py on the Linux PC. So, imageZMQ must be running and tested on both the RPi and the Linux PC. imageZMQ is pip installable.

All the test programs and instructions are in the imagezmq, imagenode and imagehub GitHub repositories. Here is the best way to get started:

  1. On your Linux PC, get imageZMQ test programs running. See the imageZMQ documentation in the imageZMQ GitHub repository.
  2. Get imageZMQ test program 2 running with RPi sending and Linux PC receiving.
  3. Only AFTER you have the imageZMQ test programs running OK should you attempt to use the imagenode and imagehub programs.
  4. Run both the imagenode test programs for imagenode Test 1 on the same computer (it will need a screen, so use a Linux PC if you don’t have a screen on your RPi). Follow the instructions in the imagenode GitHub README file.
  5. Run the imagenode test 2 and test 3 using an RPi as the imagenode and a Linux PC as the receiver.
  6. Spend time experimenting with multiple settings in your imagenode.YAML file per the imagenode documentation. There are several example imagenode.yaml files in the imagenode repository.
  7. Finally, run the test programs in the imagehub GitHub repository. You will then have a working setup that you can tune and adjust by changing imagenode.yaml settings.

I always use a virtual environment for running all tests and production programs. That is discussed in the imageZMQ, imagenode and imagehub documentation. I do all my testing running the programs with python at the command line. But in my production setup, I use systemctl / systemd. I always start the imagehub program BEFORE starting the imagenode program. My imagehub program typically runs for many months (on a laptop running Linux) without restarting, but the imagenodes restart more often than that.
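As an illustration of the production arrangement described above (imagehub started before imagenode, both under systemctl / systemd), here is a minimal unit-file sketch for the hub side. It is not taken from the repository; the user name, paths, and virtualenv location are assumptions you would replace with your own:

```ini
# /etc/systemd/system/imagehub.service -- hypothetical example
[Unit]
Description=imagehub image receiving service
After=network-online.target

[Service]
User=pi
WorkingDirectory=/home/pi/imagehub
# Use the python inside the virtual environment so imports resolve correctly
ExecStart=/home/pi/.virtualenvs/py3cv4/bin/python /home/pi/imagehub/imagehub.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Start the hub service first (`sudo systemctl start imagehub`), then start a matching imagenode service on each RPi, following the start order described above.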

I also run my prototype librarian.py on the same Linux laptop that is running imagehub.py. My prototype librarian answers queries by reading the imagehub log file. My librarian prototype program is in my yin-yang-ranch Github repository. I am continuing to develop my librarian prototype, but it is not yet ready to push to Github.

mohan51 commented 1 year ago

For a small PoC, can we skip the librarian and start with a single node and a single hub, focusing on transmitting the images? First I want to set up motion detection with one Raspberry Pi and send the detected images to the hub. Can I start with this PoC?

jeffbass commented 1 year ago

Setting up a PoC does not require a librarian program. Motion detection and temperature sensors on RPi computers are what I use in my production system.

Using a single RPi running imagenode sending images to a single imagehub computer works well. The imagehub computer should be a Mac or Linux PC with an SSD. Using an RPi for an imagehub will not work because writing image files to a micro SD card is slow and can cause SD card failure. I have found that sending jpg files from the imagenode is faster and does not slow down the network as much. There is an imagenode.yaml option that specifies the sending of jpg files. Raw OpenCV image files are quite large; jpg files are much smaller.
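A quick back-of-the-envelope calculation shows why jpg helps. The 10:1 compression ratio below is a rough assumption for illustration only; actual jpg sizes vary with image content and quality settings:

```python
# Compare the size of one raw OpenCV (BGR) frame with a typical jpg of it.
width, height, channels = 640, 480, 3
raw_bytes = width * height * channels      # uncompressed BGR frame
print(raw_bytes)                           # 921600 bytes, ~0.9 MB per frame

# jpg commonly shrinks camera frames by roughly 10x (assumed ratio)
approx_jpg_bytes = raw_bytes // 10
print(approx_jpg_bytes)                    # ~92160 bytes
```

At 10 FPS, that is roughly 9 MB/s of raw image data versus under 1 MB/s of jpg data crossing the network.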

When you run this "imagenode to imagehub" arrangement, you will use the imagenode.yaml file on the RPi to specify the camera and motion detector settings as well as the IP address of the imagehub. You can specify the jpg option in the imagenode.yaml file. The description of the imagenode.yaml file is in this imagenode Github repository.
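For orientation, here is a sketch of what such an imagenode.yaml might look like. This is hypothetical; the field names follow the example yaml files in the imagenode repository (e.g. picam-motion-test.yaml), so verify every setting against the actual documentation:

```yaml
# Hypothetical imagenode.yaml sketch -- verify field names against the
# example yaml files and docs in the imagenode repository.
node:
  name: rpi01                      # must be unique for each sending RPi
  send_type: jpg                   # send jpg instead of raw OpenCV images
hub_address:
  H1: tcp://192.168.1.100:5555     # IP address of your imagehub PC
cameras:
  P1:
    viewname: driveway
    resolution: (640, 480)
    framerate: 16
    detectors:
      motion:
        ROI: (10,10),(90,90)       # region of interest for frame differencing
```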

There is an imagehub.yaml file on the imagehub computer which specifies the location of the event log and the image files directories. The documentation about those directories and files is in the imagehub Github repository. If you specify the jpg option on the imagenode.yaml file, you must also specify the jpg option in the imagehub.yaml file. Raw OpenCV image files are quite large and take a lot of disk space; jpg files are much smaller. I use jpg files in my own production system.

You can use any program to read the log and the image files that accumulate on the imagehub computer. My own prototype librarian program is located in the yin-yang-ranch Github repository. It reads the imagehub event log while it is being actively written by the imagehub program. This has worked well for me and allows me to read the imagehub events log in real time. I use SMS texting to query the imagenode log for recent motion detection events. My prototype librarian is a poc for reading the imagehub events log and does not have any image reading or image analysis code in it yet. There are many image analysis programs in tutorials and Github repositories available. I am experimenting with some of them. But reading the imagehub event logs is my day to day use of my imagenode --> imagehub system.
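Reading the imagehub event log can start as simply as scanning for motion lines. This is a generic sketch, not part of the librarian; the log-line format shown is hypothetical, so adapt the parsing to the actual lines your imagehub writes (documented in the imagehub repository):

```python
# Minimal sketch of scanning an imagehub event log for motion events.
# The "timestamp ~ message" line format here is a hypothetical example.
def motion_events(log_lines):
    """Return (timestamp, message) pairs for lines mentioning motion."""
    events = []
    for line in log_lines:
        if 'motion' in line:
            timestamp, _, message = line.partition(' ~ ')
            events.append((timestamp.strip(), message.strip()))
    return events

sample = [
    "2023-06-12 12:15:51,966 ~ rpi01|driveway|motion",   # hypothetical lines
    "2023-06-12 12:16:02,104 ~ rpi01|driveway|still",
]
print(motion_events(sample))
```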

mohan51 commented 1 year ago

Hi Jeff,

Do we need to run imagezmq on both the node and the hub? Whenever I try to run imagezmq on the hub side, it just runs and exits without calling any methods in imagezmq.

jeffbass commented 1 year ago

I am not sure what you mean by "run imageZMQ on hub side". imageZMQ is a Python module that is imported by a hub program on the hub computer. It is not a program that is run on the hub computer. imageZMQ is also imported by a sending program on the image sending computer. It is not a program on the sending computer. Before you can attempt running something such as an imagenode, it is important that you have imageZMQ test programs running correctly on both the image sending computer and the image receiving computer. The imageZMQ test programs are simple versions of the imagenode and imagehub programs. Did you run the tests in the imageZMQ repository? I recommend that you run the first 3 tests described in the README of that repository in order. Which imageZMQ test programs did you run? Which test program failed? What was the error message, if any? If you have not been able to get the imageZMQ test programs running correctly, please open an issue in the imageZMQ repository and I will try to help you there.

mohan51 commented 1 year ago

Hi Jeff, the imagezmq test cases ran perfectly, but when I run imagehub.py on my Linux PC and imagenode.py on the RPi, they both just wait and don't execute anything. Shouldn't imagenode capture images and send them to imagehub? That process is not happening.

mohan51 commented 1 year ago

While running imagenode I am facing the error below:

2023-06-12 12:15:51,966 ~ Starting imagenode.py
2023-06-12 12:15:52,027 ~ Unanticipated error with no Exception handler.
Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 30, in main
    node = ImageNode(settings)  # start ZMQ, cameras and other sensors
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 109, in __init__
    self.setup_cameras(settings)
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 268, in setup_cameras
    cam = Camera(camera, settings.cameras, settings)  # create a Camera instance
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 863, in __init__
    self.cam = VideoStream(usePiCamera=True,
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imutils/video/videostream.py", line 13, in __init__
    from .pivideostream import PiVideoStream
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imutils/video/pivideostream.py", line 2, in <module>
    from picamera.array import PiRGBArray
ModuleNotFoundError: No module named 'picamera'
2023-06-12 12:15:52,031 ~ Exiting imagenode.py
2023-06-12 12:15:52,032 ~ SIGTERM detected, shutting down

jeffbass commented 1 year ago

I'm glad you got the imageZMQ tests cases to work. Thanks for sending the imagenode.py error messages.

Did you send images for imageZMQ tests using a PiCamera? That's the default in the imageZMQ tests 2 and 3. I presume you were able to send camera images and that you saw them on your hub PC OK? That would test that your camera was working OK on your imagenode computer. You need to run the imageZMQ tests on the same imagenode RPi computer and imagehub PC computer that will be running imagenode.py.

Your error message indicates that imagenode.py is not successfully importing the picamera module. Are you using a virtual environment? Did you do a pip install of picamera into that virtual environment? If you ran the imageZMQ tests using the picamera module OK, then you may be running imagenode.py in a different virtual environment or with a different linux path. Make sure you can import picamera correctly before you start imagenode.py. I find it helps to run Python from the command line in the same directory and virtual environment to verify that the import of picamera works OK. If you are using a USB camera instead of a picamera, you can specify that as an option for imagenode.py.
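One way to check the import before starting imagenode.py is a small script run with the same python interpreter, inside the activated virtual environment. This is a generic sketch, not part of the repository:

```python
# Quick check that the picamera module is importable in the CURRENT
# environment -- run this with the same python that runs imagenode.py.
import importlib.util
import sys

def module_available(name):
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)                # which python (and which venv) is active
print(module_available('picamera'))  # False means picamera is not installed here
```

If this prints False, do `pip install picamera` inside the same virtual environment and run the check again.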

mohan51 commented 1 year ago

Hi Jeff, I resolved the picamera issue, but could you please suggest the changes I need to make on the imaging.py side? Should I comment out the sensors? What changes do you suggest for making one hub and one node? Also, can I print the frame rate? What is the usual frame rate for this? I am using a Raspberry Pi 3 with a 5MP camera.

mohan51 commented 1 year ago

I am getting an error from imagenode: "Exception at sender.send_jpg in REP_watcher function".

I am running the programs in a virtual environment only.

mohan51 commented 1 year ago

I don't have any sensors. Do I need any sensors to run the PoC?

mohan51 commented 1 year ago

Exception at sender.send_jpg in REP_watcher function.
--- Logging error ---
Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 344, in send_jpg_frame_REP_watcher
    hub_reply = self.sender.send_jpg(text, jpg_buffer)
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imagezmq/imagezmq.py", line 162, in send_jpg_reqrep
    hub_reply = self.zmq_socket.recv()  # receive the reply message
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 13, in zmq.backend.cython.checkrc._check_rc
  File "/home/vfvi/img/imagenode/imagenode/tools/utils.py", line 55, in clean_shutdown_when_killed
    sys.exit()
SystemExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 38, in main
    hub_reply = node.send_frame(text, image)
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 347, in send_jpg_frame_REP_watcher
    self.fix_comm_link()
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 430, in fix_comm_link
    self.shutdown_imagenode()
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 449, in shutdown_imagenode
    sys.exit()
SystemExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/handlers.py", line 73, in emit
    if self.shouldRollover(record):
  File "/usr/lib/python3.9/logging/handlers.py", line 192, in shouldRollover
    self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
RuntimeError: reentrant call inside <_io.BufferedWriter name='/home/vfvi/img/imagenode/imagenode/imagenode.log'>
Call stack:
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 62, in <module>
    main()
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 43, in main
    log.warning('SIGTERM was received.')
  File "/usr/lib/python3.9/logging/__init__.py", line 1454, in warning
    self._log(WARNING, msg, args, **kwargs)
  File "/usr/lib/python3.9/logging/__init__.py", line 1585, in _log
    self.handle(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1595, in handle
    self.callHandlers(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1657, in callHandlers
    hdlr.handle(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 948, in handle
    self.emit(record)
  File "/usr/lib/python3.9/logging/handlers.py", line 75, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1183, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1083, in emit
    self.flush()
  File "/usr/lib/python3.9/logging/__init__.py", line 1063, in flush
    self.stream.flush()
  File "/home/vfvi/img/imagenode/imagenode/tools/utils.py", line 54, in clean_shutdown_when_killed
    logging.warning('SIGTERM detected, shutting down')
Message: 'SIGTERM detected, shutting down'
Arguments: ()

mohan51 commented 1 year ago

I copied picam-motion-test.yaml to imagenode.yaml. Do I need to make any changes on the imaging.py side for this?

mohan51 commented 1 year ago

Also, you give a default frame rate of 32; how do we calculate the fps from the RPi to the hub (Linux PC)?

mohan51 commented 1 year ago

Hi Jeff, I successfully ran the motion detector, but how do I calculate fps? Also, how do I detect motion in the images? When I look at the imagehub logs I see messages about "still" and "motion", but the hub is receiving all of the images. How do I filter the motion images in the hub? When I studied your repo, I understood that the images used for motion detection are sent to the hub, right?

mohan51 commented 1 year ago

How do I calculate the frame rate on the receiving side? I think the fps on the sending side is 30, right? Also, how do I calculate the bandwidth?

mohan51 commented 1 year ago

I am able to achieve only 1.69 fps on the receiving side. Is there any way to increase the fps?

jeffbass commented 1 year ago

I see from your messages above that you have resolved your errors by using YAML settings in the imagenode.yaml file. The yaml settings are the way to manage imagenode behavior. You can delete sections or settings you don't need, or you can comment them out.

The frame rate setting in imagenode is used to set the frame rate on the camera. For example, when using the picamera, the frame rate sets how often the frame is captured. Setting the frame rate in imagenode.yaml only affects the camera capture rate, not the FPS throughput.

There is no FPS measurement in the imagenode or imagehub code. When I am testing a new setup, I have the imagenode send to this imageZMQ receive program that measures FPS. Then I try various image sizes, color-to-grayscale changes, etc., to see what affects FPS. You may want to try that.
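If you want a rough FPS number inside your own receive loop, a simple counter like this works. It is a generic sketch, not code from the repository:

```python
import time

class FPSCounter:
    """Count frames and report average frames per second since start."""
    def __init__(self):
        self.start = time.monotonic()
        self.frames = 0

    def tick(self):
        """Call once per received frame."""
        self.frames += 1

    def fps(self):
        elapsed = time.monotonic() - self.start
        return self.frames / elapsed if elapsed > 0 else 0.0

# Hypothetical usage inside an imageZMQ receive loop:
#   counter = FPSCounter()
#   while True:
#       name, image = image_hub.recv_image()
#       counter.tick()
#       if counter.frames % 100 == 0:
#           print(f'{counter.fps():.1f} FPS')
```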

FPS is affected by many factors. The main one is image size. The next most important factor is jpg vs. raw OpenCV images.

I use 640x480 images in my own applications. I also use jpg rather than raw OpenCV images. Using 640x480 images sent as jpgs, I get image throughputs of 10-15 FPS. Jpg compression takes some time on the RPi side but it is faster for me, so I never use raw OpenCV images.

I have never used images larger than 640x480. I suspect your 1.69 fps may be related to a larger image size.

There is a faster alternative to OpenCV's jpg compression. One of the imageZMQ contributors provided this faster jpg sending program example. There is a matching faster jpg hub program. You may want to try that as well.

mohan51 commented 1 year ago

What is the framerate parameter? You have given 32. I am using a 5MP Raspberry Pi camera module but getting only around 7-8 fps.

mohan51 commented 1 year ago

I commented out the Sensor and Light classes in imaging.py, and now no images are transferring from node to hub. Is it mandatory to include classes like Sensor and Light, even though I don't have sensors or lights in my project?

jeffbass commented 1 year ago

The framerate parameter sets the camera frame rate only. If the throughput is only 7-8 fps, that is a limit of jpg conversion speed, network speed, and imagehub saving speed. The actual throughput frame rate is often lower than the camera frame rate in computer vision pipelines. You can set the camera frame rate to a lower value to slow the pipeline down. For example, my water meter frame rate is set to 2.

If you comment out or remove the sensor and light sections of the yaml file, those classes are loaded but unused. They are very small (only about 1K bytes or less), so commenting them out in the source code doesn't change the size of the loaded modules much. But you can definitely comment them out in the source code if you want.

mohan51 commented 1 year ago

Instead of using the detectors, can I use AI models for motion or still recognition?

mohan51 commented 1 year ago

Hi Jeff, is framerate a time constant? If yes, is framerate: 32 in seconds or milliseconds? And does the camera capture an image every 32 ms?

jeffbass commented 1 year ago

Yes you can use any AI model of your choice for motion or still recognition. AI models are going to run pretty slowly on a Raspberry Pi computer, which is why my motion detector method uses frame differencing.

Frame rate or FPS (Frames Per Second) is a count of how many frames are captured by the camera in a second. So (1 / FPS) × 1000 is the time in milliseconds from the start of the capture of one frame to the start of the capture of the next frame. For a frame rate or FPS of 32, the time in milliseconds is (1/32) × 1000 = 31.25 milliseconds. For an FPS of 2, the time is 500 milliseconds. For an FPS of 10, the time is 100 milliseconds.
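The same arithmetic as a small helper:

```python
def ms_per_frame(fps):
    """Milliseconds from the start of one frame capture to the next."""
    return (1 / fps) * 1000

print(ms_per_frame(32))   # 31.25 ms
print(ms_per_frame(2))    # 500.0 ms
print(ms_per_frame(10))   # 100.0 ms
```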

For my own projects, I want to capture and process a series of individual image frames. Even when you jpg compress a series of individual image frames, you are transmitting each frame individually without any frame-to-frame video codec compression. If you want a compressed video stream, imageZMQ and imagenode are not appropriate tools. There are many video streaming programs that use a variety of video codecs. These video codecs do video compression that does not send a series of individual image frames, but instead sends reference frames and frame differences. It may be that using a video streaming codec rather than sending a series of individual image frames is better for your project. In that case, you should use video streaming software rather than imageZMQ.

mohan51 commented 1 year ago

Hi Jeff, when I connected my two Raspberry Pis to my PC, one of the Raspberry Pis showed this: if image.flags['C_CONTIGUOUS']: AttributeError: 'NoneType' object has no attribute 'flags'

mohan51 commented 1 year ago

Can I connect and get images from both Raspberry Pis at the same time? I am using threading here.

mohan51 commented 1 year ago

When I replace one picamera module with another picamera module (v2), I get an error like: if image.flags['C_CONTIGUOUS']: AttributeError: 'NoneType' object has no attribute 'flags'

jeffbass commented 1 year ago

Yes, you can connect multiple raspberry pi computers and each one can have multiple cameras. The images on the imagehub side are labelled by the 'name' and 'viewname' values in imagenode.yaml. Look at the README of this repository. It shows how the images and image messages are labelled. It is the labels that allow you to separate & sort images from different raspberry pi's. There is also documentation in the imagehub repository that describes the labelling and directories of images and image messages.
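As a sketch of the labelling described above, the 'name' and 'viewname' settings on each RPi are what keep the images separated on the hub side. These fragments are hypothetical; verify the field names against the example yaml files in the imagenode repository:

```yaml
# Hypothetical imagenode.yaml fragment for RPi #1:
node:
  name: rpi01          # unique per sending RPi
cameras:
  P1:
    viewname: driveway

# Hypothetical imagenode.yaml fragment for RPi #2 (note the different name):
node:
  name: rpi02
cameras:
  P1:
    viewname: barn
```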

I have not seen the error image.flags['C_CONTIGUOUS']: AttributeError: 'NoneType' object has no attribute 'flags' before. I will try to duplicate it on my Raspberry Pi setup. Can you please tell me:

  1. Type of Raspberry Pi (RPi 3, RPi 4, etc.)
  2. Raspberry Pi OS version
  3. Python version
  4. OpenCV version
  5. pyZMQ version

If you can give me that information for both raspberry pi computers (both the one that is working OK and the 2nd one you added that is getting an error), I will try to identify the error.

(To get version of pyZMQ, OpenCV etc. use these commands:

pi@rpi31:~ $     # run the commands at a CLI prompt in your test directory; this is at home directory
(py3cv4) pi@rpi31:~ $ workon py3cv4   # this should be changed to the name of YOUR virtualenv
(py3cv4) pi@rpi31:~ $ python --version
Python 3.7.3
(py3cv4) pi@rpi31:~ $ pip freeze
imagezmq==1.1.1
imutils==0.5.4
numpy==1.20.2
opencv-contrib-python==4.1.0.25
picamera==1.13
psutil==5.8.0
PyYAML==5.4.1
pyzmq==22.0.3
RPi.GPIO==0.7.0
(py3cv4) pi@rpi31:~ $ 

Thanks for your help in tracking down this error. Jeff

mohan51 commented 1 year ago

  1. Type of Raspberry Pi: RPi 4
  2. Raspberry Pi OS version: Ubuntu
  3. Python version: 3.8.10
  4. OpenCV version: 4.2.0
  5. pyZMQ version: 25.1.0

mohan51 commented 1 year ago

When I store the images coming from the two Pis, they are all stored under a single label name. I am unable to find why they are stored under a single label, although I am receiving pictures from both nodes.

mohan51 commented 1 year ago

I am also getting this error:

  File "/home/bmohanakrishna/survi/env/lib/python3.8/site-packages/zmq/sugar/socket.py", line 302, in bind
    super().bind(addr)
  File "zmq/backend/cython/socket.pyx", line 564, in zmq.backend.cython.socket.Socket.bind
  File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Address already in use (addr='tcp://*:5555')

mohan51 commented 1 year ago

Is there any flexibility to change the port number? The default port number in imagezmq is 5555.

jeffbass commented 1 year ago

I have never run ubuntu on a Raspberry Pi, so I won't be able to help with that one.

Each raspberry pi imagenode MUST have a unique name for the "name" field in its imagenode.yaml. That is how the images and image messages are labelled differently on the imagehub receiving side.

You must run one and only one imagehub program on the receiving computer. Your port error happens when 2 copies of imagehub are running on the hub computer. Many senders send to the same imagehub program. That's the way the system is designed.

If you want to change ports, you specify a different port in the instantiation of imageZMQ, which is set in the imagenode.yaml and the imagehub.yaml. The ports must be the same for imagenode and imagehub. But first make sure you are running only 1 imagehub.
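For illustration, the port is just part of the tcp address string that imageZMQ binds or connects to; 5555 is only a default. The port number and IP address below are hypothetical:

```python
# The same (non-default) port must be used on both sides.
PORT = 5566                               # any free port; hypothetical value

hub_bind_address = f'tcp://*:{PORT}'      # hub side binds on all interfaces
node_connect_address = f'tcp://192.168.1.100:{PORT}'  # node side: the hub's IP

print(hub_bind_address)                   # tcp://*:5566
print(node_connect_address)               # tcp://192.168.1.100:5566

# In imagenode and imagehub these addresses come from the yaml files,
# so change the port there rather than in the source code.
```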

mohan51 commented 1 year ago

Hi Jeff, I have one more question: in our project we need to implement PUB/SUB using imagezmq. How do we implement it? Can we set REQ_REP=False in the __init__() method? Will that work, or do you suggest an alternative?

mohan51 commented 1 year ago

I am getting this error while implementing my PoC with the PUB/SUB method:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "receive1.py", line 40, in receive_frames
    receiver.send_reply(b'OK')
  File "/home/bmohanakrishna/survi/env/lib/python3.8/site-packages/imagezmq/imagezmq.py", line 311, in send_reply
    self.zmq_socket.send(reply_message)
  File "/home/bmohanakrishna/survi/env/lib/python3.8/site-packages/zmq/sugar/socket.py", line 696, in send
    return super().send(data, flags=flags, copy=copy, track=track)
  File "zmq/backend/cython/socket.pyx", line 742, in zmq.backend.cython.socket.Socket.send
  File "zmq/backend/cython/socket.pyx", line 789, in zmq.backend.cython.socket.Socket.send
  File "zmq/backend/cython/socket.pyx", line 255, in zmq.backend.cython.socket._send_copy
  File "zmq/backend/cython/socket.pyx", line 250, in zmq.backend.cython.socket._send_copy
  File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Operation not supported

mohan51 commented 1 year ago

And one more thing: I saw your PUB/SUB vs REQ/REP documentation, in which you mentioned that the receiving hub should have multiple connect_to calls for subscribing, right? But ImageHub has open_port, so how can we use connect_to in ImageHub?

mohan51 commented 1 year ago

Also, in PUB/SUB I am facing a Bad address (bundled/zeromq/src/tcp.cpp:254) error.

jeffbass commented 1 year ago

PUB/SUB is one of the messaging patterns implemented in imageZMQ. There are examples in the imageZMQ repository. There are also several discussions about PUB/SUB in the issues section of the imageZMQ repository. I think those examples and discussions are the best way to get started using PUB/SUB.

However, imagenode and imagehub are designed around the REQ/REP messaging pattern. I don't believe they could be changed to PUB/SUB without a complete redesign. I use REQ/REP in imagenode and imagehub because it provides concurrency for multiple RPi's sending to one imagehub at the same time. I don't know of any easy way to convert them to use PUB/SUB.

jeffbass commented 1 year ago

I took a look at your code. It won't work for 2 reasons.

Reason 1: Your code to set the ImageHub IP addresses for PUB senders won't work. Whether you are using REQ/REP or PUB/SUB, you can instantiate one (and only one) ImageHub per program. In PUB/SUB mode, you must instantiate ImageHub using the open_port parameter for the FIRST sender PUB address. Then you add ADDITIONAL sender PUB addresses using a connect() method call. Here is an example from one of the imageZMQ test programs named t2_recv_images_via_sub.py:

import cv2
import imagezmq

# Instantiate and provide the first publisher address
image_hub = imagezmq.ImageHub(open_port='tcp://192.168.86.39:5555', REQ_REP=False) # first publisher address
image_hub.connect('tcp://192.168.86.38:5555')    # second publisher address
# image_hub.connect('tcp://192.168.0.102:5555')  # third publisher address
# image_hub.connect('tcp://192.168.0.103:5555')  # must specify address for every sender
# repeat image_hub.connect() statements as needed for all senders.

while True:  # show received images
    rpi_name, image = image_hub.recv_image()
    cv2.imshow(rpi_name, image)  # 1 window for each unique RPi name
    cv2.waitKey(1)

Reason 2: Your code will not send different incoming images to different OpenCV windows because you are not using the message text portion of the incoming (message, image) tuples. imageZMQ always sends (message, image) tuples. That is a very important design feature of imageZMQ. The message portion is the ONLY way to tell which sender an image is coming from.

It looks like you were trying to use different threads to associate IP addresses with different RPi senders. That won't work. Instead, you need to have each RPi sender set a unique text along with each image. I often use names like RPi1, RPi2, RPi3, etc. But the text name that is sent BY THE SENDER is what will sort the incoming images to the correct OpenCV window. An easy way to identify sending computers is to use each computer's HOSTNAME. That is what I do in the imageZMQ simple example in the imageZMQ README file. Please look at that again. You will see that each of the 8 OpenCV windows has the HOSTNAME of a different RPi in the header portion of the image window. Start out by getting this technique to work with your own PUB/SUB setup. Here is an example from t2_send_images_via_pub.py:

import socket
import time
from imutils.video import VideoStream
import imagezmq

# Accept connections on all tcp addresses, port 5555
sender = imagezmq.ImageSender(connect_to='tcp://*:5555', REQ_REP=False)

rpi_name = socket.gethostname() # send RPi hostname with each image
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images until Ctrl-C
    image = picam.read()
    sender.send_image(rpi_name, image)
    # The execution loop will continue even if no subscriber is connected

In the above example PUB code, as long as the different RPi's have unique HOSTNAMES, the images in the cv2 windows will be sorted correctly. You may set the text portion of the (message, image) pairs in a different way, but it must be set on the PUB sender, NOT on the SUB imagehub receiver.

The message portion of each (message, image) pair is the ONLY way to identify which RPi is sending a particular image. Once an IP address is set via ImageHub instantiation or via the imagehub.connect() method, there is no further access to that information. Putting it in different threads won't help. All images are coming into the imageHub in a continuous stream from all sources. Images from all sources are mixed together. Depending on differing camera speeds and network adapter speeds on various RPis, the images may arrive in a very mixed order. ONLY the text portion of the (message, image) pair allows the images to be separated & sorted to different cv2 windows. This will work even if the RPi's have different camera frame rates. Your threaded approach won't work if the order of the images changes.

The fact that imageZMQ sends (message, image) tuples is the most important part of its design. It is what makes it fast. And imageZMQ uses the ZMQ messaging protocol system, which is also very fast. There are many other messaging protocols that could be adapted to sending images, such as MQTT. But if you are going to use imageZMQ, you must set the identifying text message on the image sender. And you must use the identifying text in the received (message, image) tuple to sort the images by sender. This is true whether you use REQ/REP or PUB/SUB. If this is not something you want to do, then imageZMQ won't be appropriate for your project.

Assuming that your various RPi's sending images are using unique pi_names, then below is one way your code for subscriber.py might be modified to make it work. I am coding this WITHOUT threading. If you cannot get this to work without threading, then it will be impossible to get it to work with threading. This is just a code snippet as an example and is not a complete program. But I am using code from your program to demonstrate setting the PUB addresses and using the pi_name_from_sender to sort images from different senders into different cv2 windows.

# Create a list of Raspberry Pi names and their corresponding IP addresses
pis = {
    'pi2': '192.168.6.10',
    'pi1': '192.121.6.235',
    'pi3': '192.126.138.102',
}

# note that we will NOT use the Pi names above; we will use names that are received from senders
# but we will use the ip addresses from above

first_ip = True
for pi_name, ip_address in pis.items():  # we don't use pi_name from here
    if first_ip:  # use only the first_ip address to instantiate the imagehub receiver
        receiver = imagezmq.ImageHub('tcp://'+ip_address+':5555',REQ_REP=False)
        first_ip = False
    else:
        receiver.connect('tcp://'+ip_address+':5555')    # second and additional publisher addresses

# Display images with a separate window for each different pi_name received in the image stream
# Note that each time a new pi_name is received, a new OpenCV window will be opened.
#   (or you can explicitly keep a set of "pi_names_seen_so_far" and call
#      cv2.namedWindow(pi_name_from_sender) when pi_name_from_sender is NOT yet in that set)
# It is simpler and faster to let cv2 create a window whenever cv2.imshow() sees a new
#   pi_name_from_sender; that is the default behavior in most versions of cv2.

while True:
    pi_name_from_sender, frame = receiver.recv_image()  # pi_name_from_sender MUST be used
    cv2.imwrite(image_filename, frame)  # build image_filename yourself; I always include pi_name AND date-time in my image_filenames
    cv2.imshow(pi_name_from_sender, frame)  # 1 window will be created for each unique pi_name_from_sender
    cv2.waitKey(1)

# get the above to work without threading or frame counting before adding them back
# NEVER put imageHub instantiation or imagehub.connect() in a thread. It won't work.
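In the snippet above, image_filename is referenced but never built. Here is a minimal sketch of one way to construct it; the helper name is mine, not from the imagenode or imagehub repositories:

```python
from datetime import datetime

def make_image_filename(pi_name_from_sender, ext='.jpg'):
    # datetime.now().isoformat() looks like '2013-11-18T08:18:31.809000';
    # replace ':' so the filename is safe on all filesystems
    timestamp = datetime.now().isoformat().replace(':', '.')
    return pi_name_from_sender + '-' + timestamp + ext
```

Including pi_name_from_sender in every filename keeps images from different RPis distinguishable even after they land in the same directory.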

There has been considerable discussion of PUB/SUB in the imageZMQ repository issues. See issue 74 for example. Some problems with various kinds of PUB/SUB buffer overruns have been fixed with threading. See this SUB threading example in the imageZMQ docs.

It might help you to read the imageZMQ PUB/SUB docs here and here.

mohan51 commented 1 year ago

Hi Jeff, thank you for putting me in the right direction. I modified my code, but I am unable to see the log data on the hub side. How do I see that?

mohan51 commented 1 year ago

Hi Jeff, I have seen the above examples, but my only doubt is that they are all implemented with one publisher and one subscriber. In a real scenario, how will threads work with multiple publishers and a single subscriber? And how are the connect() calls made on the subscriber side when using threading?

mohan51 commented 1 year ago

We are planning to implement the streaming data on Android. Is there any alternative protocol like imagezmq for Android? Is there a Gradle dependency for imagezmq?

jeffbass commented 1 year ago

The imagehub logs and files are described in the imagehub docs here. The locations of all the log and image files are explained in that document.

In the imagehub program, the python logging module is used to log the event message text, which includes the name of the sending RPi. The logging module format provides date and time that are appended before the message text. The lines of imagehub code that write the text are lines 137-140 in hub.py in the imagehub repository.

The format of the log messages is in imagehub.py line 56.
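As a rough sketch of how the Python logging module prepends date and time to the event message text, here is a self-contained example; the format string is an illustrative assumption, not copied from imagehub:

```python
import io
import logging

# Capture log output in a string buffer so it can be inspected
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(asctime)s ~ %(message)s'))
log = logging.getLogger('imagehub-example')
log.addHandler(handler)
log.setLevel(logging.INFO)

# An event message as it might arrive from a sending RPi
log.info('barn|Temp|78 F')
print(stream.getvalue().strip())
```

The logging formatter appends the date and time before the message text automatically, which is why the hub code only has to log the received text.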

jeffbass commented 1 year ago

My example above shows how to connect multiple publishers. The hub code does not need any Python threading or other modification. The hub code works with multiple RPis exactly as it is written above. I have run the example above with 3 RPis. It works fine. If (message, image) tuples are received from multiple RPis they are all added to the same stream of (message, image) pairs. If you run my example above it works with one sending RPi. It also works with 2 or more RPi's without changing a single line of code or adding any threading. As more RPis are started, new cv2 windows are opened and show the stream of images on the hub. One window for each sending RPi. The pi_name_from_sender creates the windows and sorts the images to the right one. Did you try running the code above with one hub and 3 RPis sending images?

ZMQ is internally multithreaded using C. All the images from all the publishers (the different RPi's) are put into a single stream on the hub as they are received. You don't have to change the code to add RPi publishers. Just have them start sending images and the hub will receive them. That is why the message part of the (message, image) tuple that is sent from each RPi publisher is so very important. The message part separates the stream of images from multiple RPis into the separate windows. Most of my imagehubs receive from 8 to 10 RPis. No threading in Python is needed.
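The idea that one merged stream is separated back out by the message part of each tuple can be sketched in plain Python, with no sockets; the function name here is illustrative, not from the repositories:

```python
from collections import defaultdict

def sort_stream_by_sender(stream):
    # stream: an iterable of (pi_name_from_sender, frame) tuples in arrival
    # order, the way ZMQ merges images from all publishers on the hub
    windows = defaultdict(list)  # one entry per sender, like one cv2 window each
    for pi_name_from_sender, frame in stream:
        windows[pi_name_from_sender].append(frame)
    return dict(windows)
```

However many publishers are interleaved in the stream, the message part alone is enough to route every frame to the right per-sender destination.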

In my imagehub repository, I use REQ/REP as a design choice. My imagenode repository also uses REQ/REP. But very little code would need to be changed to have imagehub behave as a subscriber in a PUB/SUB setup. Read the imagehub code and modify the places where ImageHub gets instantiated to make it similar to the ImageHub() instantiation and receive.connect() methods above. I cannot help you with those code modifications since I don't use PUB/SUB in imagehub and have no plans for doing so.

To get started, run my example code above that shows connecting to 3 RPis. Then start the sending publisher code on all 3 of the RPis. All their images will be received by the hub program. Once you have that working, use the code in my imagehub repository as a model and modify it. But get the example code above working first. If you cannot get that working with 3 RPi publishers sending to a single hub, then adding more complex hub code will not work.

jeffbass commented 1 year ago

Regarding your Android question, I won't be able to help. I have never written code for Android. I have never tested imageZMQ on Android. A search of "ZMQ and Android" will probably help find some place to start that is more suitable than imageZMQ. If you do get imageZMQ working with Android, post it as an issue on the imageZMQ repository so that other imageZMQ users can learn from it. I hope you find something that works for you.

mohan51 commented 1 year ago

Hi Jeff, we are working on your suggestions. Also, I want to know how you are sending text via imagezmq, i.e., temperature and all the other values? I think you are creating a tiny black image and sending the text appended to it via imagezmq, but on the hub side, where are you extracting the text from the image?

jeffbass commented 1 year ago

Text messages are sent via imageZMQ using the usual (message, image) tuple. A tiny (3,3) image is set up in the __init__ module of imaging.py, line 51. Then sensor messages are created in imaging.py lines 663 to 669. The resulting (message, image) tuple is appended to the send_q in line 669. By treating the sensor messages just like any other (message, image) tuple, the code in both imagenode and imagehub is simpler. The same technique is used to send imagenode restart messages in imaging.py lines 154 to 163.
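This technique can be sketched without imageZMQ at all. The function name, field layout, and the '|' delimiter below are assumptions for illustration, not code copied from imaging.py:

```python
# Hypothetical sketch: package a sensor reading as a (message, image) tuple
tiny_image = [[0, 0, 0] for _ in range(3)]  # stands in for the (3,3) black image

def make_sensor_tuple(node_name, sensor_type, reading):
    # '|'-delimited fields are an assumption here; the hub splits the text
    # into delimited fields and inspects the second one to classify it
    text = node_name + '|' + sensor_type + '|' + str(reading)
    return (text, tiny_image)
```

Because the tuple has the same shape as an image tuple, the same send_q and the same hub receive loop handle it with no special cases.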

mohan51 commented 1 year ago

Where are you extracting the message from the images on the imagehub side?

jeffbass commented 1 year ago

The text extraction code is in hub.py and is used to do 2 things:

  1. Determine the type of the (message, image) tuple: 1) a very short test message, 2) a "Heartbeat" message, 3) a raw OpenCV image, 4) a jpeg image, or 5) a text message to send to the log
  2. Take appropriate action based on (message, image) type

The code is in hub.py lines 118 to 140 (in this snippet, text is the full received message string and message is text split into its delimited fields):

        if len(message) < 2:  # a "send_test_image" that should not be saved
            return b'OK'
        type = message[1]  # type is the second delimited field in text
        t0 = type[0]  # the first character of type is unique & compares faster
        if t0 == 'H':  # Heartbeat message; return before testing anything else
            return b'OK'
        node_and_view = message[0].strip().replace(' ', '-')
        # datetime.now().isoformat() looks like '2013-11-18T08:18:31.809000'
        timestamp = datetime.now().isoformat().replace(':', '.')
        image_filename = node_and_view + '-' + timestamp

        if t0 == "i":  # image
            pass  # ignore image type; only saving jpg images for now
        elif t0 == 'j':  # jpg; append to image_q
            self.image_q.append((image_filename, image, t0,))
            # writing image files from image_q is normally done in a thread
            # but for unthreaded testing, uncomment below to write every image
            # self.write_one_image()
        else:
            log_text = text  # may strip spaces later?
            self.log.info(log_text)
        return b'OK'

Very short or "heartbeat" messages are ignored. OpenCV raw images are also ignored because I always send jpg images in my own projects. The jpg images are appended to the image-writing image_q. The text-only messages are written to the log (including sensor messages, motion detection messages, etc.). The message types are sorted this way because that is how they are encoded in imagenode code.

Note that because I always use the REQ / REP messaging pattern, I always return b'OK'. That would not be done with a PUB / SUB messaging pattern.
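The dispatch described above can be mimicked in a few lines of plain Python. This is a sketch, not the actual hub.py code, and the '|' field delimiter is an assumption:

```python
def classify(text):
    # return the action the hub would take for a given received message text
    message = text.split('|')
    if len(message) < 2:    # a "send_test_image" message; not saved
        return 'ignore'
    t0 = message[1][0]      # first character of the type field is unique
    if t0 == 'H':           # Heartbeat message
        return 'ignore'
    if t0 == 'i':           # raw OpenCV image; ignored by this hub
        return 'ignore'
    if t0 == 'j':           # jpg image; queued for writing
        return 'save-jpg'
    return 'log'            # anything else is logged as an event message
```

Comparing only the first character of the type field is the same speed trick the hub code uses: it avoids full string comparisons in the hot receive loop.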

mohan51 commented 1 year ago

Code snippet on publisher side:

    while True:
        text = 'Hello World'
        image = np.zeros((100,300,3), dtype=np.uint8)
        cv2.putText(image, text, (10,50), font, 1, (255,255,255), 2, cv2.LINE_AA)
        sender.send_image('x', image)

Code snippet on subscriber side:

    while True:
        _, frame = receiver.recv_image()
        _, jpeg = cv2.imencode('.jpg', frame)
        frame_bytes = jpeg.tobytes()

1) How can I extract the text from the image on the subscriber side?