jeffbass / imagezmq

A set of Python classes that transport OpenCV images from one computer to another using PyZMQ messaging.
MIT License

ImageHub object receiving too many (duplicate) image messages from a client? #59

Closed · tzhivkov closed this issue 3 years ago

tzhivkov commented 3 years ago

I am following the tutorial by Adrian Rosebrock: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

My issue is with how imagezmq receives image messages once ImageHub is initialized. For example, on the client side (client.py) I check that the webcam FPS is 30, so I assume 30 image messages are being sent every second. But counting the number of images received by the server (server.py), I get a variable rate of anywhere between 90 and 160 FPS (received images). All images come from the same camera, and the count restarts approximately every second. Of course I expect some variability, but not this much. Has anyone tried recording the incoming frames, and have they noticed anything similar? Or perhaps my method for counting the incoming images is incorrect?

[screenshots: server_received_images, manual_fps_method, client_fps_count]
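
The screenshots above are not preserved in this transcript. As a stand-in, here is a minimal sketch of the kind of per-second counter described, assuming an imagezmq ImageHub receive loop (illustrative, not the original code):

import time
import imagezmq

image_hub = imagezmq.ImageHub()
count = 0
window_start = time.time()

while True:  # receive images until interrupted
    sent_from, image = image_hub.recv_image()
    image_hub.send_reply(b"OK")  # REP reply so the sender can continue
    count += 1
    now = time.time()
    if now - window_start >= 1.0:  # restart the count roughly every second
        print("frames received in the last second: {0}".format(count))
        count = 0
        window_start = now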

jeffbass commented 3 years ago

I don't see enough code here to tell where you are incrementing count_frame. Your frame counting code could have an issue, but I can't tell without seeing it. Could you show the code where you increment count_frame on the server side?

Here is an example of counting frames on the server side that works, from the imagezmq tests. It starts counting from the first image received rather than resetting the count to zero on a delta_time interval, but you could add your delta_time calculation to it.
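
The example referenced above is not preserved in this transcript; below is a minimal sketch along those lines, counting from the first received image (ImageHub, recv_image, and send_reply are imagezmq's real API; the timing details are illustrative):

import time
import imagezmq

image_hub = imagezmq.ImageHub()
image_count = 0
start_time = None  # set when the first image arrives

try:
    while True:
        sent_from, image = image_hub.recv_image()
        if start_time is None:
            start_time = time.time()  # start timing at the first received image
        image_count += 1
        image_hub.send_reply(b"OK")
except KeyboardInterrupt:
    if start_time is not None:
        elapsed = time.time() - start_time
        print("received {0} images in {1:.1f}s ({2:.1f} FPS)".format(
            image_count, elapsed, image_count / max(elapsed, 1e-6)))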

tzhivkov commented 3 years ago

Hi Jeff, thank you for the very fast reply! I looked at the example code you mentioned. I am still getting about 10 times the expected number of frames from the server.py script, but I'm not sure how or why: according to the client, which is sending 30 image frames a second, it should be 30 FPS. Is this happening because I'm not using a Raspberry Pi and camera, or could there be a different reason?

Here is my full client and server code (which I updated from the one you mentioned).

Please note: I'm not using Raspberry Pis with cameras, but my own laptop camera for testing. I will move on to USB cameras once I have more of the functionality I want.

Edit: I couldn't add the code using the "insert code" option for some reason; the formatting was off. Instead I decided to use images, which show the code more cleanly. Apologies again!

[screenshots: client_code, server_code1, server_code2, calculate_fps]

tzhivkov commented 3 years ago

Hi Jeff,

Sorry for the constant messages. It seems I have found the main issue: it comes from the client-side code. I will post the code that fixed my problem below; others might find it useful. However, I'm still interested in what is causing the behaviour, so if anyone has suggestions or can explain it, I would really appreciate it!

It seems that each frame was being sent 10 times: the client was in fact sending 300 frames each second. The built-in OpenCV property "cv2.CAP_PROP_FPS" misleadingly reports that, according to the stream metadata, the camera should be running at 30 FPS. But by using numpy to compare consecutive frames with "np.all(frame == previous_frame)", I found that many more frames are being read: not 30 FPS, but ~300 FPS.

I'm essentially discarding 90% of the frames read, which shouldn't be necessary. If anyone has any idea why this is happening, please let me know! I would like to fix it and remove the need to check for duplicate frames.

My full code is below:

'''
client.py is based on work by Jeff Bass (author of ImageZMQ) and tutorial by Adrian Rosebrock
Link: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

Code taken directly from the tutorial (with slight tweaks ~ Tiz) to get the basics working, before moving on to developing this into something useful.
'''

from imutils.video import VideoStream
import imagezmq
import argparse
import socket
import time
import cv2
from datetime import datetime
import numpy as np

#construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True, help="ip address of the server to which the client will connect")
args = vars(ap.parse_args())
#initialize the ImageSender object with the socket address of the
#server
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(args["server_ip"]))

#get the host name, initialize the video stream, and allow the
#camera sensor to warmup
devName = socket.gethostname()
#vs = VideoStream(usePiCamera=True).start()
#vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start()
vs = VideoStream(src=0).start()
time.sleep(2.0)

#returns 30.0FPS, why are more images being sent?
fps = vs.stream.get(cv2.CAP_PROP_FPS)
print("FPS of current device: {0}".format(fps))

#control when timer starts
first_image = True
image_count = 0
#hold previous image; np.zeros keeps .any() False until a frame is stored
#(np.empty returns uninitialized memory, so .any() would be unpredictable)
p_frame = np.zeros(1)

try:
    while True:
        #read the frame from the camera and send it to the server
        frame = vs.read()
        #if there is no previous image saved
        if not p_frame.any():
            p_frame = frame
            continue
        #if a previous image exists
        if p_frame.any():
            #if image is duplicate...do nothing
            if np.all(frame == p_frame):
                print("caught a duplicate!!!")
                continue
            else:
                #USB camera requires manual resizing???
                #frame = imutils.resize(frame, width=320)
                sender.send_image(devName, frame)

                if first_image:
                    #added timestamp to calculate fps
                    start_timestamp = int(datetime.utcnow().timestamp())
                    first_image = False

                image_count += 1  #global count of all images received
                #clear previous image (zeros, so .any() is False again)
                p_frame = np.zeros(1)

except (KeyboardInterrupt, SystemExit):
    print("\nCaught ctrl+C, exiting...")

finally:
    if not first_image:  #guard: skip stats if no frame was ever sent
        #calculate the delta time
        end_timestamp = int(datetime.utcnow().timestamp())
        delta_time = max(end_timestamp - start_timestamp, 1)  #avoid divide-by-zero on sub-second runs
        man_fps = image_count / float(delta_time)
        #print the approx. fps manually
        print("manually calculated fps: {0}".format(man_fps))

[console output screenshots: real_fps_client, real_fps_server]

jeffbass commented 3 years ago

Thanks for your code examples. I believe you are experiencing a known behavior of the imutils VideoStream class: under some circumstances, it can return duplicate images. There are several items about this in the imutils issue tracker. It is not really a bug, but part of the way imutils reads images from the camera and updates them in a thread. That may be causing some of the duplicate images you are getting.

The first thing I would recommend that you try is to read your USB / laptop camera using OpenCV directly, rather than using the imutils VideoStream class. It may be the threading and updating in VideoStream that is causing your duplicate images on the client side.
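
To illustrate why a threaded reader can hand back duplicates, here is a stripped-down sketch of the pattern (a simplified illustration, not imutils' actual source): an update thread overwrites the latest frame at camera rate, while read() returns whatever is currently stored, so a loop that polls faster than the camera produces frames sees the same frame more than once.

import threading
import cv2

class ThreadedReader:
    """Simplified illustration of a threaded camera reader."""
    def __init__(self, src=0):
        self.stream = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        # runs at camera rate (e.g. ~30 FPS), overwriting self.frame in place
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()

    def read(self):
        # returns immediately with the *latest* stored frame; callers polling
        # faster than the camera produces frames will see duplicates
        return self.frame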

To use OpenCV to read your camera directly, replace the VideoStream camera initialization code:

#vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start()
#vs = VideoStream(src=0).start()  # replace this line ...
src = 0
vs = cv2.VideoCapture(src)  # ... with this one to use OpenCV VideoCapture directly
time.sleep(2.0)

And also replace the camera image reading code:

        #read the frame from the camera and send it to the server
        #frame = vs.read()  # replace this line...
        (frame_flag, frame) = vs.read()  # ...with this one to read the frames without threading

With these minor code tweaks, you will be using OpenCV to read the camera frames directly, without the threading in the imutils VideoStream class. It may eliminate the duplicate frames. Try this and let me know what happens.

(Also, here is a suggestion to improve Python code readability when posting code in issues like this: GitHub code fencing does support Python syntax highlighting; just add the word "python" immediately after the first ``` that starts the code block. You can even edit your issue above to try this, if you'd like. See GitHub's tutorial page on creating and highlighting code blocks.)
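
For example, a fenced block like this gets Python syntax highlighting (the snippet itself is arbitrary):

```python
import numpy as np
print(np.zeros(1).any())  # prints False
```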

tzhivkov commented 3 years ago

Thanks for your reply and for the helpful tips! I will try this later today and update this comment with my results!

EDIT: I followed your advice, Jeff, and it seems to have helped. The FPS has indeed come down, but instead of 30 FPS I'm getting half of that. I don't think it is your package or ZMQ that is causing it. I'm not sure what is causing this behaviour, but I finally have time to look into the issue properly. I will close the issue for now and maybe re-open it with the cause at a later time. Below are the updated code and console output for anyone interested.

Client code:

'''
client.py is based on work by Jeff Bass (author of ImageZMQ) and tutorial by Adrian Rosebrock
Link: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

Code taken directly from the tutorial (with slight tweaks ~ Tiz) to get the basics working, before moving on to developing this into something useful.
'''

#from imutils.video import VideoStream
import imagezmq
import argparse
import socket
import time
import cv2
from datetime import datetime
import numpy as np

#construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True, help="ip address of the server to which the client will connect")
args = vars(ap.parse_args())
#initialize the ImageSender object with the socket address of the server
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(args["server_ip"]))

#get the host name, initialize the video stream, and allow the
#camera sensor to warmup
devName = socket.gethostname()
#vs = VideoStream(usePiCamera=True).start()
#vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start()
#vs = VideoStream(src=0).start()

#replaced imutils with opencv
src=0
vs = cv2.VideoCapture(src) #use VideoCapture directly
time.sleep(2.0)

#this is needed if "imutils" package is required
#fps = vs.stream.get(cv2.CAP_PROP_FPS)

#return FPS of videocapture (opencv2)
fps = vs.get(cv2.CAP_PROP_FPS)
print("FPS of current device: {0}".format(fps))

#control when timer starts
first_image = True
image_count = 0
# hold previous image; np.zeros keeps .any() False until a frame is stored
# (np.empty returns uninitialized memory, so .any() would be unpredictable)
p_frame = np.zeros(1)

try:
    while True:
        #read the frame from the camera and send it to the server
        #frame = vs.read()
        (frame_flag, frame) = vs.read()
        #if there is no previous image saved
        if not p_frame.any():
            p_frame = frame
            continue
        #if a previous image exists
        if p_frame.any():
            #if image is duplicate...do nothing
            if np.all(frame == p_frame):
                print("caught a duplicate!!!")
                continue
            else:
                #USB camera requires manual resizing???
                #frame = imutils.resize(frame, width=320)
                sender.send_image(devName, frame)

                if first_image:
                    #added timestamp to calculate fps
                    start_timestamp = int(datetime.utcnow().timestamp())
                    first_image = False

                image_count += 1  #global count of all images received
                #clear previous image (zeros, so .any() is False again)
                p_frame = np.zeros(1)

except (KeyboardInterrupt, SystemExit):
    print("\nCaught ctrl+C, exiting...")

finally:
    if not first_image:  #guard: skip stats if no frame was ever sent
        #calculate the delta time
        end_timestamp = int(datetime.utcnow().timestamp())
        delta_time = max(end_timestamp - start_timestamp, 1)  #avoid divide-by-zero on sub-second runs
        man_fps = image_count / float(delta_time)
        #print the approx. fps manually
        print("manually calculated fps: {0}".format(man_fps))
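
One detail worth checking in the loop above, as a possible explanation for the roughly halved rate: after each send, p_frame is reset to an empty array, so the very next frame read only refills p_frame and is never sent, which by itself would discard every other distinct frame. Here is a minimal sketch of an alternative check that keeps the last sent frame as the comparison baseline (reusing vs, sender, devName, and np from the client code above; illustrative, not the original code):

#keep the last *sent* frame as the baseline, so no frame is consumed
#just to refill p_frame
p_frame = None

while True:
    (frame_flag, frame) = vs.read()
    if not frame_flag:
        continue  #read failed; try again
    #skip the frame only if it duplicates the last frame actually sent
    if p_frame is not None and np.all(frame == p_frame):
        continue
    sender.send_image(devName, frame)
    p_frame = frame  #every distinct frame is sent and becomes the new baseline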

Server code:

"""timing_receive_images.py -- receive and display images, then print FPS stats

A timing program that uses imagezmq to receive and display an image stream
from one or more Raspberry Pi computers and print timing and FPS statistics.

1. Run this program in its own terminal window on the Mac:
python timing_receive_images.py

This "receive and display images" program must be running before starting
the RPi image sending program.

2. Run the image sending program on the RPi:
python timing_send_images.py

A cv2.imshow() window will appear on the Mac showing the transmitted images
as a video stream. You can repeat Step 2 and start the timing_send_images.py
on multiple RPis and each one will cause a new cv2.imshow() window to open.

To end the programs, press Ctrl-C in the terminal window of the receiving
program first, so that FPS and timing statistics will be accurate. Then, press
Ctrl-C in each terminal window running a Raspberry Pi image sending program.

Contributors:
Jeff Bass
Tim Sears

I made basic edits to add my own way of working out the FPS. ~Tiz
"""

import sys

import time
import traceback
import cv2
from collections import defaultdict
from imutils.video import FPS
import imagezmq
from datetime import datetime

# instantiate image_hub
image_hub = imagezmq.ImageHub()

image_count = 0
sender_image_counts = defaultdict(int)  # dict for counts by sender
first_image = True

try:
    while True:  # receive images until Ctrl-C is pressed
        sent_from, image = image_hub.recv_image()
        if first_image:
            fps = FPS().start()  # start FPS timer after first image is received
            # added my own timestamp to calculate fps
            start_timestamp = int(datetime.utcnow().timestamp())
            first_image = False
        fps.update()
        image_count += 1  # global count of all images received
        sender_image_counts[sent_from] += 1  # count images for each RPi name
        cv2.imshow(sent_from, image)  # display images 1 window per sent_from
        cv2.waitKey(1)
        # other image processing code, such as saving the image, would go here.
        # often the text in "sent_from" will have additional information about
        # the image that will be used in processing the image.
        image_hub.send_reply(b"OK")  # REP reply
except (KeyboardInterrupt, SystemExit):
    print("\nCaught ctrl+C, exiting...")
    pass  # Ctrl-C was pressed to end program; FPS stats computed below
except Exception as ex:
    print('Python error with no Exception handler:')
    print('Traceback error:', ex)
    traceback.print_exc()
finally:
    # stop the timer and display FPS information
    print()
    print('Test Program: ', __file__)
    print('Total Number of Images received: {:,g}'.format(image_count))
    if first_image:  # never got images from any RPi
        sys.exit()

    # calculate the delta time
    end_timestamp = int(datetime.utcnow().timestamp())
    delta_time = end_timestamp - start_timestamp
    man_fps = image_count / float(delta_time)
    fps.stop()
    # print the approx. fps manually
    print("manually calculated fps: {0}".format(man_fps))
    print('Number of Images received from each RPi:')
    for RPi in sender_image_counts:
        print('    ', RPi, ': {:,g}'.format(sender_image_counts[RPi]))
    image_size = image.shape
    print('Size of last image received: ', image_size)
    uncompressed_size = image_size[0] * image_size[1] * image_size[2]
    print('    = {:,g} bytes'.format(uncompressed_size))
    print('Elapsed time: {:,.2f} seconds'.format(fps.elapsed()))
    print('Approximate FPS: {:.2f}'.format(fps.fps()))
    cv2.destroyAllWindows()  # closes the windows opened by cv2.imshow()
    image_hub.close()  # closes ZMQ socket and context
    sys.exit()

Server console output:

[screenshot: server_test_fps]

Client console output:

[screenshot: client_test_fps]