neuralmagic / deepsparse

Sparsity-aware deep learning inference runtime for CPUs
https://neuralmagic.com/deepsparse/

Conversion and display of yolov8 rtsp video inference results #1654

Open wynshiter opened 5 months ago

wynshiter commented 5 months ago

Is your feature request related to a problem? Please describe. First of all, thanks to the developers for the example. Below is the code I use to test the frame rate of real-time inference on an RTSP stream:

import cv2
from cv2 import getTickCount, getTickFrequency
from deepsparse import Pipeline

# Sparsified YOLOv8 ONNX model
model_path = r"/mnt/d/code/c++/myyolo/deepsparse/model.onnx"

# Open the RTSP stream
cap = cv2.VideoCapture("rtsp://172.20.64.1:8554/cam")

# Set up the DeepSparse Pipeline
yolo_pipeline = Pipeline.create(task="yolo", model_path=model_path)

while cap.isOpened():
    loop_start = getTickCount()
    success, frame = cap.read()
    if not success:
        break

    results = yolo_pipeline(images=[frame])
    print(results)

    # Don't know how to write this part; in YOLOv8 we can use results[0].plot()
    annotated_frame = frame  # placeholder so the rest of the loop runs

    loop_time = getTickCount() - loop_start
    total_time = loop_time / getTickFrequency()
    fps = 1 / total_time

    fps_text = f"FPS: {fps:.2f}"
    font = cv2.FONT_HERSHEY_SIMPLEX
    font_scale = 1
    font_thickness = 2
    text_color = (0, 0, 255)
    text_position = (10, 30)

    cv2.putText(annotated_frame, fps_text, text_position, font, font_scale, text_color, font_thickness)
    cv2.imshow('img', annotated_frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Describe the solution you'd like

I got the error below:

[h264 @ 0x8174fc0] Missing reference picture, default is 0
[h264 @ 0x8174fc0] decode_slice_header error
2024-06-17 01:53:51 deepsparse.pipeline WARNING  Could not create v2 'yolo' pipeline, trying legacy
DeepSparse, Copyright 2021-present / Neuralmagic, Inc. version: 1.7.1 COMMUNITY | (3904e8ec) (release) (optimized) (system=avx512_vnni, binary=avx512)
[h264 @ 0x8191d40] Missing reference picture, default is 0
[h264 @ 0x8191d40] decode_slice_header error
boxes=[[[1.4071826934814453, 0.03023386001586914, 29.676950454711914, 30.646894454956055], [0.08174419403076172, -5.963331699371338, 15.248629570007324, 22.094199657440186], [45.56520080566406, 24.2977352142334, 70.48969268798828, 33.44650840759277], [22.56393814086914, 12.133990287780762, 34.980445861816406, 16.698843955993652]]] scores=[[28422.794921875, 24517.25390625, 10659.7431640625, 5234.47705078125]] labels=[['8204.0', '3514.0', '8188.0', '6348.0']] intermediate_outputs=None
Traceback (most recent call last):
  File "/mnt/d/code/c++/myyolo/deepsparse/test_rtst.py", line 20, in <module>
    annotated_frame = results[0].plot()
AttributeError: '_YOLOImageOutput' object has no attribute 'plot'

I would like to know whether DeepSparse provides an API for plotting the results directly. What coordinate system is used for the YOLO results, and where in the codebase is the corresponding postprocessing code located?
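In the meantime, the workaround I am considering is drawing the boxes manually with OpenCV. This is only a sketch, not an official DeepSparse API: it assumes that results.boxes[0], results.scores[0], and results.labels[0] line up per detection, and that the boxes are [x1, y1, x2, y2] pixel coordinates in the original frame. The annotate_frame helper name is made up.

import cv2

def annotate_frame(frame, results):
    # Draw each detection from the DeepSparse YOLO pipeline output onto a copy
    # of the frame, assuming [x1, y1, x2, y2] pixel-coordinate boxes.
    annotated = frame.copy()
    for (x1, y1, x2, y2), score, label in zip(
        results.boxes[0], results.scores[0], results.labels[0]
    ):
        p1 = (int(x1), int(y1))
        p2 = (int(x2), int(y2))
        cv2.rectangle(annotated, p1, p2, (0, 255, 0), 2)
        cv2.putText(annotated, f"{label}: {score:.2f}", (p1[0], max(p1[1] - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return annotated

With a helper like this, the placeholder in the loop above would become annotated_frame = annotate_frame(frame, results).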



wynshiter commented 5 months ago

If I print

 print(results.labels)

I get the following. What do these values stand for?

[['8204.0', '3434.0', '8188.0', '6387.0']]
[['8204.0', '3434.0', '8188.0', '6387.0']]
[['8204.0', '3434.0', '8188.0', '6387.0']]
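If these are class indices serialized as float strings, something like the sketch below should map them back to names. CLASS_NAMES here is a hypothetical placeholder that would have to match the model's training classes, and IDs like 8204 seem far too large for a typical class list, which is part of the confusion.

CLASS_NAMES = {0: "person", 1: "bicycle", 2: "car"}  # hypothetical; use the model's real class list

for label_str, score in zip(results.labels[0], results.scores[0]):
    class_id = int(float(label_str))  # e.g. '8204.0' -> 8204
    name = CLASS_NAMES.get(class_id, f"class_{class_id}")
    print(f"{name} (id={class_id}), score={score:.2f}")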

wynshiter commented 5 months ago

Related doc: https://github.com/neuralmagic/deepsparse/blob/main/docs/use-cases/cv/object-detection-yolov5.md