KLelkov opened this issue 6 years ago
@pec-orange Hi there, I was trying to do the same but I'm unable to :-(. Were you able to do it?
@VellalaVineethKumar Hey! The very first thing you need to do is get the RTSP link to your camera's video stream. You can google how to get it for your exact camera module, or simply look for it in the camera's settings. In general, it looks like this: rtsp://10.2.0.10/live
You can check if your link is correct by opening it in VLC player (File -> play from URL). Or you can test with this free rtsp stream sample: rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov
Alright, after you got your link - you need to copy this example program: https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_from_webcam_faster.py
Check lines 13-14
# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)
This is how you get a video stream from a webcam. But this does NOT work for IP cameras. Here's the function you will need:
def open_cam_rtsp(uri, width, height, latency):
    gst_str = ('rtspsrc location={} latency={} ! '
               'rtph264depay ! h264parse ! omxh264dec ! '
               'nvvidconv ! '
               'video/x-raw, width=(int){}, height=(int){}, '
               'format=(string)BGRx ! '
               'videoconvert ! appsink').format(uri, latency, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
Now you can open your camera stream by providing the URL of the RTSP stream:
video_capture = open_cam_rtsp("rtsp://192.168.1.4/live1.sdp", 1280, 720, 200)
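If the capture fails to open, the first thing to check is the pipeline string itself: a missing space or quote between the concatenated pieces silently merges two caps fields. A small sketch (the `build_rtsp_pipeline` helper name is mine, not from the example) that builds the same string so you can print and inspect it before passing it to cv2.VideoCapture:

```python
def build_rtsp_pipeline(uri, width, height, latency):
    """Build the GStreamer pipeline string used by open_cam_rtsp."""
    return (
        'rtspsrc location={} latency={} ! '
        'rtph264depay ! h264parse ! omxh264dec ! '
        'nvvidconv ! '
        'video/x-raw, width=(int){}, height=(int){}, '
        'format=(string)BGRx ! '
        'videoconvert ! appsink'
    ).format(uri, latency, width, height)

# Print the string and eyeball it: every element should be separated
# by ' ! ' and every caps field by ', '.
pipeline = build_rtsp_pipeline("rtsp://192.168.1.4/live1.sdp", 1280, 720, 200)
print(pipeline)
```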
The rest of the code should be pretty straightforward. Here you load the faces of the people you want to recognize. Notice that "obama.jpg" is actually a path to the image; in this case, the image is located in the same folder as the Python script.
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
Let's say you have a photo of your friend Steve in "known_faces/steve.png". The code will be as follows:
steve_image = face_recognition.load_image_file("known_faces/steve.png")
Thanks for your quick response! When I run facerec_from_webcam_faster.py from the examples folder: python@cse-w1:~/face_recognition-master/examples$ python facerec_from_webcam_faster.py
I tried to print the return string in the function:
rtspsrc location=rtsp://admin:admin@10.2.3.177 latency=200 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, width=(int)1280, height=(int)720format=(string)BGRx ! videoconvert ! appsink
Traceback (most recent call last):
File "facerec_from_webcam_faster.py", line 56, in
My function call is link = "rtsp://admin:admin@10.2.3.10"; video_capture = open_cam_rtsp(link, 1280, 720, 200), and I get the ret value as False from VideoCapture().
In the function, I removed cv2.CAP_GSTREAMER from the return statement, as I got an error when I ran it on my webcam.
Please help!
@VellalaVineethKumar It looks like you are trying to resize the broken image in line 56.
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
Did you test your video stream before attempting face recognition? Via VLC player or simply by disabling image processing in your code (comment out lines 47-86 in the example), like this:
while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
Anyway, to solve the broken-image processing issue you can add a try-except block:
try:
    ret, frame = video_capture.read()
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
except Exception as e:
    print(str(e))
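If you'd rather not hide failures behind a broad try-except, you can check the ret flag explicitly and skip bad frames. A minimal sketch, assuming only that the source has a read() method returning (ret, frame) pairs like cv2.VideoCapture does; the FakeCapture class is a made-up stand-in for illustration:

```python
def read_frames(capture, max_consecutive_failures=30):
    """Yield only valid frames from `capture`; give up after too many
    consecutive failed reads (e.g. the stream dropped).

    `capture` is anything with a read() -> (ret, frame) method,
    such as a cv2.VideoCapture.
    """
    failures = 0
    while failures < max_consecutive_failures:
        ret, frame = capture.read()
        if not ret or frame is None:
            failures += 1
            continue
        failures = 0
        yield frame

# Hypothetical stand-in source for illustration: two good frames,
# then permanent failure.
class FakeCapture:
    def __init__(self):
        self._data = [(True, "frame1"), (True, "frame2")]
    def read(self):
        return self._data.pop(0) if self._data else (False, None)

frames = list(read_frames(FakeCapture(), max_consecutive_failures=3))
print(frames)  # only the valid frames survive
```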
By the way, which operating system are you running this on? My example works on Ubuntu 16.04 and uses the GStreamer library...
@pec-orange With a few tweaks to facerec_from_webcam_faster.py I get 30 FPS using the Jetson TX2. However, even before I modified that example I was still getting around 12-15 FPS. Did you set the performance to max with sudo nvpmodel -m 0?
You might also need to set up a swapfile and overcommit your memory.
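For reference, setting up a swapfile on a Jetson usually looks something like this (the 4 GB size and the overcommit mode are assumptions; adjust for your board):

```shell
# Create and enable a 4 GB swapfile
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Allow memory overcommit (mode 1 = always overcommit)
sudo sysctl vm.overcommit_memory=1
```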
@EmpireofKings Thanks, that might be it. I think my FPS was much lower because I was using a full-HD IP camera instead of a webcam, and a Jetson TX1 instead of a Jetson TX2. Also, I found out that my Jetson TX1 had an old CUDA version, so image processing acceleration on the graphics card wasn't working properly.
Thanks for the reply, I will try it.
Hello @pec-orange, just wanted to know how this attempt turned out. Does nvpmodel make the performance better? And what configuration did you use for this attempt? Looking forward to your reply, thanks in advance :D
@idpdka Sorry, but I didn't test it. Regarding the configuration - I'm working with a Jetson Xavier right now and it is so much faster than the TX2! I highly recommend it if your budget allows for that kind of thing.
I also tried to run some tests on jetson Nano - and I was able to get a stable 5-10 FPS.
Best of luck in your projects!
Just an additional note:
For anyone using the Jetson platform, consider changing omxh264dec to nvv4l2decoder in the pipeline above, because "The gst-omx plugin is deprecated in Linux for Tegra (L4T) Release 32.1. Use the gst-v4l2 plugin instead", as mentioned in the Accelerated GStreamer User Guide.
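Concretely, the swap only touches the decoder element; the rest of the pipeline is unchanged. A sketch of the same pipeline string with nvv4l2decoder substituted for omxh264dec (the helper name is mine, for illustration):

```python
def build_rtsp_pipeline_v4l2(uri, width, height, latency):
    """Same pipeline string as open_cam_rtsp, but using the gst-v4l2
    decoder (nvv4l2decoder) instead of the deprecated gst-omx one
    (omxh264dec), per the Accelerated GStreamer User Guide."""
    return (
        'rtspsrc location={} latency={} ! '
        'rtph264depay ! h264parse ! nvv4l2decoder ! '
        'nvvidconv ! '
        'video/x-raw, width=(int){}, height=(int){}, '
        'format=(string)BGRx ! '
        'videoconvert ! appsink'
    ).format(uri, latency, width, height)

print(build_rtsp_pipeline_v4l2("rtsp://192.168.1.4/live1.sdp", 1280, 720, 200))
```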
This might not be an ideal solution, as I have variables in the try block that fail to initialize :/ Is there a better way to solve this issue?
Description
Is there any way to make the face detection part run faster? I'm running this on my RTSP camera stream and the video is very slow: each frame takes about 2 seconds to process. I managed to accelerate my video stream to 3 FPS by using the "cnn" face detection method, by resizing the processing frame by a factor of 0.33, and by moving all the processing function calls into a separate thread.
I measured the time it takes to process one frame - it is somewhere around 350 ms. But this is done in a separate thread (not the one that handles video display), so I don't understand why this slows my video down so much.
I am running this program on Nvidia Jetson TX1.
What I Did