jeffbass / imagezmq

A set of Python classes that transport OpenCV images from one computer to another using PyZMQ messaging.
MIT License
1.02k stars · 161 forks

Reading video of different pi #10

Closed · julio16101 closed this issue 5 years ago

julio16101 commented 5 years ago

Hi, thanks for this library, it makes things easier. I have a question: I can see that it is possible to receive video from several Raspberry Pis and display each stream in a different window on the server, but is it possible to manipulate each camera's video stream individually?

Let's say I have one camera in the kitchen and another in the living room, and I want to apply facial recognition to the kitchen stream and object detection to the living-room stream. What is the method to access each place's video channel individually and manipulate the streams separately?

Thank you :)

jeffbass commented 5 years ago

The method to uniquely access the video channel of each place is to have each camera put a unique name like "kitchen" or "living_room" in the text portion of every imagezmq message that is sent from that camera.

As you can see from the picture in the Introduction of the README.rst, it is definitely possible to have different camera streams sorted into different windows or in different processing streams. What allows the image streams from different cameras to be sent to different processes is the text portion of each imagezmq message. Every message transmitted by imagezmq is a pair of items: (text, image). The text portion is what is used on the receiving server / imagehub to sort the images into different streams and processes. For example, in test program 2 sender (test_2_rpi_send_images.py), the Raspberry Pi hostname is used to tag each image uniquely (since every Raspberry Pi that I send from has a unique name):

import socket
import time
import imagezmq
from imutils.video import VideoStream

sender = imagezmq.ImageSender(connect_to='tcp://your-hub-hostname:5555')  # replace with your hub's hostname

rpi_name = socket.gethostname()  # send the unique RPi hostname with each image
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images as stream until Ctrl-C
    image = picam.read()
    sender.send_image(rpi_name, image)

And then, in the receiving program, the text portion of the received message pair is used to display the images from each uniquely named Raspberry Pi in a different window, as in test program 2 receiver (test_2_mac_receive_images.py):

import cv2
import imagezmq

image_hub = imagezmq.ImageHub()  # listens on tcp://*:5555 by default

while True:  # show streamed images until Ctrl-C
    rpi_name, image = image_hub.recv_image()
    cv2.imshow(rpi_name, image)  # 1 separate window for each unique RPi hostname
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')

When you build your application, have each camera (say the kitchen or the living room) send a unique name with every image that is sent. Your text portion could be "kitchen" or "living_room". Then, in the receiving program, use that unique name to send the frames to the appropriate process.
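To make the routing idea concrete, here is a minimal sketch of a name-based dispatch table. The handler functions and the camera names ("kitchen", "living_room") are illustrative placeholders, not part of imagezmq itself; in a real receiver, the `(name, image)` pair would come from `image_hub.recv_image()`.

```python
def detect_faces(image):
    # placeholder: run facial recognition on the kitchen stream here
    return ("faces", image)

def detect_objects(image):
    # placeholder: run object detection on the living-room stream here
    return ("objects", image)

# map each unique camera name to the processing it should receive
HANDLERS = {"kitchen": detect_faces, "living_room": detect_objects}

def dispatch(name, image):
    """Route an image to the handler registered for its camera name."""
    handler = HANDLERS.get(name)
    if handler is None:
        return None  # unknown camera: ignore, or log it
    return handler(image)

# In the receiving loop this would be used roughly as:
#   name, image = image_hub.recv_image()
#   dispatch(name, image)
#   image_hub.send_reply(b'OK')
```

Because the dispatch key is the text portion of the message, adding a new camera only means adding one entry to the table.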

The test program pairs in this imagezmq repository provide simple examples of this. My imagenode and imagehub repositories show more complete examples of this. Every imagenode uses a yaml file to assign a unique name to every camera. Then the imagehub uses that unique name to process each camera stream differently. You may want to look at that code as well.

Feel free to ask further questions in this issue thread. I'll do my best to answer them.

julio16101 commented 5 years ago

Thank you.

The idea had crossed my mind, but I thought it was better to ask first, so I have not tried it yet.

One more question: wouldn't filtering based on the camera's origin name make video reception a bit slow?

Thanks for answering.

jeffbass commented 5 years ago

I have not found the code that filters on the camera's origin name to be slow at all. The limit for my applications has been network speed, not the filtering. You can look at the main() function of my imagehub to see what my filter code looks like.
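A rough illustration of why the filter is cheap (this is not code from the repository): per-frame routing by camera name is a single dictionary lookup plus a function call, which takes on the order of microseconds, while receiving and decoding a frame over the network takes milliseconds.

```python
import timeit

# hypothetical handlers; the work inside them is what actually costs time
handlers = {"kitchen": lambda img: img, "living_room": lambda img: img}

def dispatch(name, image):
    return handlers[name](image)

# average cost of one dispatch over 100,000 calls
per_call = timeit.timeit(lambda: dispatch("kitchen", None),
                         number=100_000) / 100_000
print(f"dispatch overhead: {per_call * 1e6:.2f} microseconds per frame")
```

The dispatch overhead should come out far below a millisecond, which is negligible next to network transfer time for even a small image.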

julio16101 commented 5 years ago

It works correctly, thanks.