Closed m0tmd closed 4 years ago
Hi~:)
```python
from yolov4.tf import YOLOv4

yolo = YOLOv4()
yolo.classes = "coco.names"
yolo.make_model()
yolo.load_weights("yolov4.weights", weights_type="yolo")
yolo.inference(0, is_image=False)
```
```python
from yolov4.tf import YOLOv4
import cv2

yolo = YOLOv4()
yolo.classes = "coco.names"
yolo.make_model()
yolo.load_weights("yolov4.weights", weights_type="yolo")
yolo.inference(
    "/dev/video1",
    is_image=False,
    cv_apiPreference=cv2.CAP_V4L2,
    # cv_frame_size=(640, 480),
    # cv_fourcc="YUYV",
)
```
Awesome :+1: It's a straightforward way to implement yolov4. Is it possible to change the threshold for detection accuracy?
Currently, the only way is to change the default value mentioned above. I will modify the code so that it can be set in yolo.predict or yolo.inference.
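Until that option lands, one possible workaround is to filter the returned boxes yourself before drawing them. This is just a sketch under the assumption that each predicted row ends with the detection probability (e.g. `[x, y, w, h, class_id, prob]`); check the library's actual output layout before relying on it:

```python
import numpy as np

def filter_by_score(bboxes, threshold=0.5):
    # Assumes each row ends with the detection probability,
    # e.g. [x, y, w, h, class_id, prob]. Adjust the column
    # index if the library uses a different layout.
    bboxes = np.asarray(bboxes)
    if bboxes.size == 0:
        return bboxes
    return bboxes[bboxes[:, -1] >= threshold]
```

You would call it between `yolo.predict(frame)` and `yolo.draw_bboxes(...)`.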
Thanks for looking at this. Eventually, the possibility to change the frame rate would be nice as well, and an option to choose which classes get detected in the frame could cover different development needs. I think those options would make it a complete tool for all sorts of implementation purposes. But you have already done nice work. :)
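For the class-selection idea, one post-processing approach is to drop boxes whose class id is not in a whitelist before drawing. Again a hedged sketch: it assumes the class id sits in column 4 of each predicted row (`[x, y, w, h, class_id, prob]`), which may differ in practice:

```python
import numpy as np

def keep_classes(bboxes, wanted_ids):
    # wanted_ids: iterable of class ids to keep,
    # e.g. {0} for "person" in coco.names.
    bboxes = np.asarray(bboxes)
    if bboxes.size == 0:
        return bboxes
    mask = np.isin(bboxes[:, 4].astype(int), list(wanted_ids))
    return bboxes[mask]
```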
Here's how I integrated it on a video stream, if that helps:
```python
from yolov4.tf import YOLOv4
import numpy as np
import cv2 as cv

yolo = YOLOv4()
yolo.classes = "coco.names"
yolo.make_model()
yolo.load_weights("yolov4.weights", weights_type="yolo")

cap = cv.VideoCapture('http://...')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
    bboxes = yolo.predict(frame)
    frame = cv.cvtColor(frame, cv.COLOR_RGB2BGR)
    image = yolo.draw_bboxes(frame, bboxes)
    cv.namedWindow("result", cv.WINDOW_AUTOSIZE)
    cv.imshow("result", image)
    if cv.waitKey(1) == ord('q'):
        break
cap.release()
cv.destroyAllWindows()
```
Nice piece of code :) I added image resizing, frame-rate calculation and execution-time measurement, then converted it to tflite with yolov4-tiny:
```python
from yolov4.tflite import YOLOv4
import cv2 as cv
import time

yolo = YOLOv4(tiny=True)
yolo.classes = "coco.names"
yolo.load_tflite("yolov4-tiny.tflite")

cap = cv.VideoCapture(0)
cap.set(cv.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 720)
while cap.isOpened():
    ret, frame = cap.read()
    tickmark = cv.getTickCount()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
    prev_time = time.time()
    bboxes = yolo.predict(frame)
    frame = cv.cvtColor(frame, cv.COLOR_RGB2BGR)
    image = yolo.draw_bboxes(frame, bboxes)
    curr_time = time.time()
    exec_time = curr_time - prev_time
    info = "time: %.2f ms" % (1000 * exec_time)
    print(info)
    cv.namedWindow("result", cv.WINDOW_AUTOSIZE)
    fps = cv.getTickFrequency() / (cv.getTickCount() - tickmark)
    cv.putText(image, "FPS: {:05.2f}".format(fps), (10, 20),
               cv.FONT_HERSHEY_PLAIN, 1, (255, 255, 0), 2)
    cv.imshow("result", image)
    if cv.waitKey(1) == ord('q'):
        break
cap.release()
cv.destroyAllWindows()
```
Unfortunately, with my old machine, it runs at about 225 ms execution time per frame, which is about 100 ms more than the detectvideo.py of hunglc007, with the same config.
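One thing that can help when reading those timings: per-frame FPS computed from a single tick interval is noisy. A simple exponential moving average smooths the readout (a generic sketch, not part of the library):

```python
class FPSMeter:
    """Exponentially smoothed frames-per-second estimate."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha  # smoothing factor: higher reacts faster
        self.fps = None

    def update(self, frame_time_s):
        # frame_time_s: seconds spent processing the last frame
        inst = 1.0 / frame_time_s
        if self.fps is None:
            self.fps = inst
        else:
            self.fps = self.alpha * inst + (1 - self.alpha) * self.fps
        return self.fps
```

In the loop above you would call `meter.update(exec_time)` and draw the returned value instead of the instantaneous `fps`.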
Thanks @m0tmd.
When starting the optimization work, I will refer to detectvideo.py. :)
If you plan to optimize, you may eventually be interested in this:
It's about defining a VideoStream class to handle streaming of video from a webcam in a separate processing thread on an RPi. But I don't know if it really helps on non-single-board computers.
Does tensorflow-yolov4 support live video streaming from a webcam? I had a look at base_class.py, and it seems not.
Thank you.