syvince opened 1 week ago
https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#module-streams
Sorry, I didn't understand. Do I need to push the detected frames to port 1935 through ffmpeg while running the program? The code is as follows:
from ultralytics import YOLO
import cv2
import subprocess
weight_path = "yolov8s.pt"
det_model = YOLO(weight_path)
camera_path = "rtsp://admin:abc123456@180.201.6.112/h264/ch1/main/av_strea"
# camera_path="test.mp4"
cap = cv2.VideoCapture(camera_path)
original_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
original_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
rtmp_url = 'rtmp://180.201.6.110:1938/video'
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-pixel_format', 'bgr24',
           '-video_size', f'{original_width}x{original_height}',
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-tune', 'zerolatency',
           '-f', 'flv',
           rtmp_url]
proc = subprocess.Popen(command, stdin=subprocess.PIPE)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # read failed or stream ended
        print("Failed to read a frame from the camera")
        break
    results = det_model.predict(frame, verbose=False)
    annotated_frame = results[0].plot()
    proc.stdin.write(annotated_frame.tobytes())
proc.stdin.close()
proc.wait()
cap.release()
My go2rtc configuration is as follows:
streams:
  hk: rtsp://admin:abc123456@180.201.6.112/h264/ch1/main/av_strea
webrtc:
  listen: ":8555"  # address of your local server and port (TCP/UDP)
rtmp:
  listen: ":1938"  # by default - disabled!
So we're talking about incoming sources: https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#incoming-sources RTMP is a terrible protocol. I recommend RTSP or HTTP-FLV, maybe MPEG-TS. The only important thing is to create an empty stream in the config (just a name with an empty source). I have had a bad experience using RTMP from FFmpeg.
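For reference, a minimal sketch of what the empty-stream approach could look like in go2rtc.yaml — the stream name `yolo` and the RTSP listen port are assumptions, not taken from the original config:

```yaml
streams:
  hk: rtsp://admin:abc123456@180.201.6.112/h264/ch1/main/av_strea
  yolo:            # empty source: go2rtc waits for an incoming publisher
rtsp:
  listen: ":8554"  # go2rtc's RTSP server; publish to rtsp://<host>:8554/yolo
```

With a config like this, the annotated frames would be published to go2rtc over RTSP rather than RTMP, following the maintainer's recommendation.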
I run YOLOv8 recognition by reading the RTSP video stream, and I want to push the annotated inference results into go2rtc. How do I do that?
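Following the maintainer's advice to avoid RTMP, one way to adapt the FFmpeg command is to publish RTSP instead of FLV. This is a sketch, not a confirmed setup: the go2rtc host/port and the stream name `yolo` are assumptions (they would need to match an empty stream defined in go2rtc.yaml), and the frame geometry placeholders stand in for the values read from `cv2.VideoCapture`.

```python
import subprocess

# Assumed go2rtc RTSP ingest URL: go2rtc's RTSP server listens on 8554 by
# default, and "yolo" is an empty stream name assumed to exist in the config.
rtsp_out_url = "rtsp://180.201.6.110:8554/yolo"

# Placeholders for the geometry normally read from cv2.VideoCapture.
width, height, fps = 1920, 1080, 25

# Same raw-BGR stdin pipe as the original command, but the output side
# publishes RTSP instead of RTMP/FLV.
command = [
    "ffmpeg",
    "-y",
    "-f", "rawvideo",
    "-pixel_format", "bgr24",
    "-video_size", f"{width}x{height}",
    "-r", str(fps),
    "-i", "-",                 # read raw frames from stdin
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "-preset", "ultrafast",
    "-tune", "zerolatency",
    "-f", "rtsp",              # RTSP output muxer instead of flv
    "-rtsp_transport", "tcp",  # TCP tends to be more robust than UDP
    rtsp_out_url,
]

# The rest of the pipeline stays the same:
# proc = subprocess.Popen(command, stdin=subprocess.PIPE)
# ...and per frame: proc.stdin.write(annotated_frame.tobytes())
```

After that, the annotated stream should be available from go2rtc under the `yolo` name (WebRTC, MSE, etc.), alongside the raw `hk` stream.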