Open daniyal-ahmad-khan opened 1 month ago
Hello @daniyal-ahmad-khan, thank you for sharing your issue with Ultralytics! This is an automated response to help provide immediate guidance. An Ultralytics engineer will follow up soon to offer further assistance.
For new users, we highly recommend visiting the Docs where you can find many helpful Python and CLI examples along with answers to common questions.
If this is a Bug Report, please provide a minimum reproducible example. This will help us identify and resolve the issue more efficiently.
For any custom training Questions, including yours about GStreamer video lag, please ensure you include as much relevant information as possible, such as dataset image examples, training logs, and confirm you're following our Tips for Best Training Results.
Consider upgrading to the latest ultralytics package with:
pip install -U ultralytics
Make sure your environment meets all requirements (Python>=3.8 with PyTorch>=1.8), then check whether your issue has already been resolved in the latest version.
For real-time discussions, join us on Discord. For more detailed exchanges, find us on Discourse or check out our Subreddit.
You can run YOLO in any of these verified environments:
Ensure that you are using a CI-certified environment for your tests.
Thank you for your patience and understanding. More support will be available soon!
"queue max-size-buffers=1 leaky=0 ! "
Try setting
leaky=2
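For reference, a minimal sketch of the posted pipeline string with only that setting changed (everything else assumed to stay as in the script below):

# Same pipeline as in the posted script, but with leaky=2 ("downstream") so the
# queue discards its oldest buffer rather than letting frames pile up when full.
gst_pipeline = (
    "thetauvcsrc ! "
    "decodebin ! "
    "videoconvert ! "
    "video/x-raw,format=BGR ! "
    "queue max-size-buffers=1 leaky=2 ! "
    "appsink drop=true max-buffers=1"
)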
I have tried it with both 1 and 2; same result, unfortunately.
What's the inference latency?
Inference is 5.6 ms, and the total pipeline takes 0.02 seconds.
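If the per-frame loop really finishes in ~0.02 s, the extra second is more likely frames buffering upstream than model time. A rough timing sketch for separating read latency from predict latency; the webcam index and weights file here are placeholders standing in for the actual GStreamer pipeline string and .engine file:

import time

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights for illustration
cap = cv2.VideoCapture(0)   # placeholder source; substitute the GStreamer pipeline string

while cap.isOpened():
    t0 = time.perf_counter()
    ret, frame = cap.read()                # time spent waiting for / decoding a frame
    t1 = time.perf_counter()
    if not ret:
        break
    results = model(frame, verbose=False)  # preprocess + inference + postprocess
    t2 = time.perf_counter()
    print(f"read: {(t1 - t0) * 1000:.1f} ms  predict: {(t2 - t1) * 1000:.1f} ms")

cap.release()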
Try decoupling capture and inference into separate threads so frames never queue up behind the model:
import threading
import queue
import time

import cv2
from ultralytics import YOLO

# Initialize YOLO model
model = YOLO("/home/abhi/Documents/roich-360/yolov8n.engine")
model()  # Run once to instantiate model.predictor (used directly below)
# model.to("cuda")  # Ensure the model is using GPU

# Define the optimized GStreamer pipeline for the RICOH THETA camera
gst_pipeline = (
    "thetauvcsrc ! "
    "decodebin ! "
    "videoconvert ! "
    "video/x-raw,format=BGR ! "
    "queue max-size-buffers=1 leaky=0 ! "
    "appsink drop=true max-buffers=1"
)

# Initialize video capture, explicitly requesting the GStreamer backend
cap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise IOError("Cannot open RICOH THETA camera")

# Set frame width and height to reduce resolution
# FRAME_WIDTH = 640
# FRAME_HEIGHT = 480
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_WIDTH)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT)

# Thread-safe queue with max size 1 to hold the latest frame
frame_queue = queue.Queue(maxsize=1)

# Event to signal thread termination
stop_event = threading.Event()


def frame_capture():
    """Continuously capture frames from the camera and put them into the queue."""
    while not stop_event.is_set():
        ret, frame = cap.read()
        if not ret:
            print("Failed to grab frame")
            stop_event.set()
            break
        # Resize the frame to reduce processing time
        # frame = cv2.resize(frame, (FRAME_WIDTH, FRAME_HEIGHT))
        # Keep only the latest frame: discard any stale frame before putting the new one
        if not frame_queue.empty():
            try:
                frame_queue.get_nowait()
            except queue.Empty:
                pass
        frame_queue.put(frame)


def frame_detection():
    """Continuously get frames from the queue and perform YOLOv8 inference."""
    while not stop_event.is_set():
        start_time = time.time()
        try:
            frame = frame_queue.get(timeout=1)  # Wait for a frame
        except queue.Empty:
            continue  # No frame available, keep waiting
        # Run inference via the predictor's preprocess / inference / postprocess steps
        prep = model.predictor.preprocess([frame])
        out = model.predictor.inference(prep)
        results = model.predictor.postprocess(out, prep, [frame])
        # Annotate frame
        annotated_frame = results[0].plot()
        # Display the frame
        cv2.imshow("YOLO Inference", annotated_frame)
        # Check for 'q' key press to exit
        if cv2.waitKey(1) & 0xFF == ord("q"):
            stop_event.set()
            break
        time_elapsed = time.time() - start_time
        print(f"Inference time: {time_elapsed:.5f} seconds")


# Create and start threads
capture_thread = threading.Thread(target=frame_capture, daemon=True)
detection_thread = threading.Thread(target=frame_detection, daemon=True)
capture_thread.start()
detection_thread.start()

try:
    # Keep the main thread alive while the worker threads are running
    while not stop_event.is_set():
        capture_thread.join(timeout=0.1)
        detection_thread.join(timeout=0.1)
except KeyboardInterrupt:
    stop_event.set()

# Cleanup
cap.release()
cv2.destroyAllWindows()
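If relying on the predictor internals (preprocess/inference/postprocess) feels brittle across releases, a drop-in variant of the detection loop can use the public model() call instead; this sketch assumes the same model, frame_queue, stop_event, and cv2 from the script above:

def frame_detection_simple():
    """Consumer loop using the high-level model() call instead of predictor internals."""
    while not stop_event.is_set():
        try:
            frame = frame_queue.get(timeout=1)  # wait for the latest frame
        except queue.Empty:
            continue
        # model() performs preprocess, inference, and postprocess in one call
        results = model(frame, verbose=False)
        annotated_frame = results[0].plot()
        cv2.imshow("YOLO Inference", annotated_frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            stop_event.set()
            break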
Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO and Vision AI!
Search before asking
Question
I am running this code on JetPack 5. The video pipeline does not have any delay without YOLOv8, but when I integrate YOLOv8 there is a constant lag of 1 s between the actual and the processed feed. Any idea what's wrong here? Code:
Additional
No response