ultralytics / ultralytics

Ultralytics YOLO11 🚀
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Using track() to handle multi-rtsp stream input with TensorRT + Triton Inference Server #14257

Open tofulim opened 3 months ago

tofulim commented 3 months ago

Search before asking

Question

hi there!

I'm here to use the track method efficiently. Here are my questions.

Q1

Is there any way to track (run model inference on) multiple RTSP streams?

I found some ways to run inference on multiple inputs with threading, as mentioned here, but that may cost model-instance memory, among other things.

In my opinion, extracting each image from the multiple inputs with threads is reasonable, because that part is I/O-bound. But here is the curious thing: once the input images are ready from the I/O task, is there any good way to run inference on them as a batch?

Here is my example; it doesn't work, though (the source param is only an example case):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
sources = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

res = model.track(source=sources, stream=True, batch=2)

Of course, predict's source param only takes (str | int | PIL.Image | np.ndarray, optional) types, so the example above is wrong. But when I feed a path ("./list.stream"), it does seem to extract images using threads; however, I have no idea whether it runs inference just once as a batch, because I have to check it with a for loop (model.track returns a Python generator).

I want to know what the best way is.
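
For reference, the check with a for loop that I mention above would look roughly like this (assuming "./list.stream" is a plain-text file with one RTSP URL per line, and using result.speed just to inspect per-image timings):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# "./list.stream" is assumed to hold one RTSP URL per line
results = model.track(source="./list.stream", stream=True)

# Iterate the generator and inspect which source each result came from
# and how long preprocessing/inference/postprocessing took (in ms)
for i, result in enumerate(results):
    print(i, result.path, result.speed)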

Q2

I think that using TensorRT with Triton Inference Server would be the fastest way to serve a YOLO model. Is that right?

As your great documentation shows, a YOLO model uploaded to Triton is used by passing the localhost Triton server URL (http://localhost:8000/yolo).

Is making threads and many model instances still the best way in that case? Because we could communicate with the Triton model over HTTP, and Triton has its own instance_group param to handle queued requests.

Additional

No response

github-actions[bot] commented 3 months ago

👋 Hello @tofulim, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 2 months ago

Hi there!

Thank you for your detailed questions and for exploring the capabilities of Ultralytics YOLO for multi-RTSP stream tracking. Let's address your queries one by one.

Q1: Multi-RTSP Stream Tracking

To efficiently handle multiple RTSP streams, you are correct that threading for I/O tasks (image extraction) is a reasonable approach. However, for inference, batching can indeed be more efficient. Here’s how you can approach it:

  1. Threaded Image Extraction: Use threads to handle the I/O operations for extracting frames from multiple RTSP streams. This ensures that the I/O operations do not become a bottleneck.

  2. Batch Inference: Once you have the frames extracted, you can perform batch inference. Unfortunately, the current implementation of model.track() does not support batch inference directly from multiple RTSP streams. However, you can manually batch the frames and then pass them to the model for inference.

Here’s an example of how you might implement this:

import threading
import cv2
from ultralytics import YOLO

# Function to capture frames from RTSP streams
def capture_frames(rtsp_url, frame_queue):
    cap = cv2.VideoCapture(rtsp_url)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame_queue.append(frame)

# Initialize YOLO model
model = YOLO("yolov8n.pt")

# RTSP sources
sources = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Frame queues for each source
frame_queues = [[] for _ in sources]

# Start threads for capturing frames
threads = []
for i, source in enumerate(sources):
    thread = threading.Thread(target=capture_frames, args=(source, frame_queues[i]))
    thread.start()
    threads.append(thread)

# Perform batch inference
while True:
    # Only build a batch once every stream has a frame available,
    # so frames are not popped and then silently dropped
    if all(frame_queues):
        frames = [queue.pop(0) for queue in frame_queues]
        results = model.track(frames, persist=True)
        for result in results:
            result.show()

Q2: TensorRT with Triton Inference Server

Using TensorRT with Triton Inference Server is indeed one of the fastest ways to serve YOLO models, especially for high-throughput applications. Triton Inference Server can handle multiple requests efficiently and supports batching, which can significantly improve performance.

To integrate YOLO with Triton, you can follow the steps outlined in our Triton Inference Server guide. This setup allows you to leverage Triton's capabilities for managing model instances and handling queued requests efficiently.

For your specific use case, you can set up Triton to handle multiple RTSP streams by sending batched requests to the server. Here’s a conceptual example:

  1. Set up Triton Inference Server with your YOLO model optimized with TensorRT.
  2. Capture frames from RTSP streams using threads, as shown in the previous example.
  3. Send batched requests to Triton for inference.

Here’s a simplified example of how you might send batched requests to Triton:

import numpy as np
import tritonclient.http as httpclient

# Initialize Triton client
triton_client = httpclient.InferenceServerClient(url="localhost:8000")

# Function to send batch request to Triton
def send_batch_request(frames):
    inputs = [httpclient.InferInput("input", frames.shape, "FP32")]
    inputs[0].set_data_from_numpy(frames)
    outputs = [httpclient.InferRequestedOutput("output")]
    results = triton_client.infer(model_name="yolo", inputs=inputs, outputs=outputs)
    return results

# Capture frames and send batch requests
while True:
    if all(frame_queues):
        frames = [queue.pop(0) for queue in frame_queues]
        # Stack frames into one batch and cast to FP32 to match the declared input type
        frames_np = np.stack(frames).astype(np.float32)
        results = send_batch_request(frames_np)
        # Process results

This approach allows you to leverage Triton's efficient request handling and batching capabilities, ensuring optimal performance.

I hope this helps! If you have any further questions or need additional assistance, feel free to ask. 😊

tofulim commented 2 months ago

Thanks for your response!

I have some more questions.

Q3

So you said that model.track() doesn't support batch inference directly from multi-RTSP streams.

Then what is this below? (image) In this image, it seems that the inference times of the two images are equal at 72.5 ms. Is that batch inference? (0: 384x640 3 cars, 72.5ms, 1: 384x640 3 cars, 72.5ms)

Or is it just an aggregate response time, i.e., the two images were inferenced one by one and the times reported together?

If it does do batch inference, what's the difference between the two cases below?

case1: model.track(source="./list.stream")

and

case2: model.track(source=[frame0, frame1, frame2, ... , frameN])

Our architecture will differ depending on the chart below:

|        | Source input | Thread I/O img capture | Batch inference |
| ------ | ------------ | ---------------------- | --------------- |
| Case 1 | list.stream (multi-RTSP sources) | internal support | internal support |
| Case 2 | frames (img list) | needs to be done manually | supported |

Please fix this chart if it's wrong.

glenn-jocher commented 2 months ago

@tofulim hi there!

Thank you for your follow-up questions! Let's dive into your queries regarding batch inference and the differences between the two cases you mentioned.

Q3: Batch Inference Clarification

The image you provided shows that the inference time for two images is the same (72.5ms), which might suggest batch inference. However, let's clarify how model.track() handles multiple sources.

Batch Inference in model.track()

Currently, model.track() does not support batch inference directly from multiple RTSP streams. Instead, it processes each frame individually, even if they are captured concurrently. The equal inference times you see are likely due to the model processing each frame sequentially but with similar processing times.

Differences Between Case 1 and Case 2

  1. Case 1: model.track(source="./list.stream")

    • Source Input: A file containing multiple RTSP stream URLs.
    • Thread I/O Image Capture: Handled internally by the Ultralytics library, which spawns threads to capture frames from each RTSP stream.
    • Batch Inference: Not supported internally. Each frame is processed individually.
  2. Case 2: model.track(source=[frame0, frame1, frame2, ... , frameN])

    • Source Input: A list of frames (images) captured manually.
    • Thread I/O Image Capture: Needs to be handled manually by the user, typically using threading or multiprocessing.
    • Batch Inference: Supported. You can pass a batch of frames to the model for inference, which can be more efficient.

Here's a corrected version of your chart:

|        | Source Input | Thread I/O Image Capture | Batch Inference |
| ------ | ------------ | ------------------------ | --------------- |
| Case 1 | list.stream (multi-RTSP sources) | Internal support | Not supported internally |
| Case 2 | frames (image list) | Requires manual handling | Supported |

Example for Case 2: Batch Inference with Manual Frame Capture

To achieve batch inference, you can manually capture frames from multiple RTSP streams and then pass them as a batch to the model. Here’s an example:

import threading
import cv2
from ultralytics import YOLO

# Function to capture frames from RTSP streams
def capture_frames(rtsp_url, frame_queue):
    cap = cv2.VideoCapture(rtsp_url)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame_queue.append(frame)

# Initialize YOLO model
model = YOLO("yolov8n.pt")

# RTSP sources
sources = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Frame queues for each source
frame_queues = [[] for _ in sources]

# Start threads for capturing frames
threads = []
for i, source in enumerate(sources):
    thread = threading.Thread(target=capture_frames, args=(source, frame_queues[i]))
    thread.start()
    threads.append(thread)

# Perform batch inference
while True:
    # Only build a batch once every stream has a frame available,
    # so frames are not popped and then silently dropped
    if all(frame_queues):
        frames = [queue.pop(0) for queue in frame_queues]
        results = model.track(frames, persist=True)
        for result in results:
            result.show()

This approach ensures that you can leverage batch inference for efficiency while handling the I/O operations separately.

Conclusion

For optimal performance, especially when dealing with multiple RTSP streams, consider manually capturing frames and then performing batch inference. This method allows you to take full advantage of the model's capabilities while managing I/O operations efficiently.

If you have any further questions or need additional assistance, feel free to ask. We're here to help! 😊

tofulim commented 2 months ago

OMG, thanks for your sincere and rapid response!!!

I appreciate it so much!

glenn-jocher commented 2 months ago

You're very welcome! 😊

I'm glad I could help. If you have any more questions or need further assistance, feel free to ask. We're here to support you in making the most out of Ultralytics YOLO. Happy coding and best of luck with your project! 🚀

For any additional details or advanced configurations, you can always refer to our comprehensive documentation.

Have a great day!

tofulim commented 2 months ago

But hey... in your official docs about the 'predict' method there is a section on multi-stream sources, and it supports BATCHING...

Look below (the multi-stream source case) and please tell me... (image, from https://docs.ultralytics.com/modes/predict/?h=rtmp#inference-sources)

If the call below already supports threaded capture and batch-inference tracking, there's no reason to use manual capture with threads and source=frames...

model.track(source="./list.streams")

Was batch inference only supported in predict?

But as you can see below, track uses predict in the end, so I think that if predict supports multi-RTSP batch inference internally, then track should support it too. (image) https://github.com/ultralytics/ultralytics/blob/755dcd6ca07a941944440a379faa4a38d987a8e8/ultralytics/engine/model.py#L444

Or... is there anything that I'm missing?

glenn-jocher commented 2 months ago

Hi there!

Thank you for your detailed observation and for pointing out the documentation. Let's clarify the situation regarding batch inference and multi-stream support in the track method.

Batch Inference and Multi-Stream Support

You are correct that the predict method supports batch inference, including from multi-stream sources. This capability allows for efficient processing of multiple inputs simultaneously. The track method, which builds on predict, should theoretically inherit this functionality.

Clarification on model.track()

The model.track() method does indeed utilize the predict method internally. However, the current implementation of track may not fully leverage batch inference for multi-RTSP streams as effectively as predict. This might be due to additional complexities involved in tracking, such as maintaining object IDs across frames.

Example for Multi-Stream Batch Inference

To ensure you are leveraging batch inference effectively, you can use the predict method directly for multi-stream sources. Here’s an example:

from ultralytics import YOLO

# Initialize YOLO model
model = YOLO("yolov8n.pt")

# Multi-stream source
sources = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Perform batch inference
results = model.predict(source=sources, stream=True)
for result in results:
    result.show()

Tracking with Batch Inference

If you need to perform tracking with batch inference, you might need to handle frame extraction and batching manually, as discussed earlier. This ensures that you can maintain control over the batch processing and tracking logic.

Next Steps

  1. Verify Latest Version: Ensure you are using the latest version of the Ultralytics package, as updates may include improvements and bug fixes related to batch inference and tracking.
  2. Reproducible Example: If you encounter any issues, providing a minimum reproducible example can help us diagnose and address the problem more effectively. You can find guidance on creating a reproducible example here.

I hope this clarifies the situation! If you have any further questions or need additional assistance, feel free to ask. 😊

tofulim commented 2 months ago

hi there again!

Sorry for asking further questions.

Using your code:

import threading
import cv2
from ultralytics import YOLO

# Function to capture frames from RTSP streams
def capture_frames(rtsp_url, frame_queue):
    cap = cv2.VideoCapture(rtsp_url)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frame_queue.append(frame)

# Initialize YOLO model
model = YOLO("yolov8n.pt")

# RTSP sources
sources = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Frame queues for each source
frame_queues = [[] for _ in sources]

# Start threads for capturing frames
threads = []
for i, source in enumerate(sources):
    thread = threading.Thread(target=capture_frames, args=(source, frame_queues[i]))
    thread.start()
    threads.append(thread)

# Perform batch inference
while True:
    # Only build a batch once every stream has a frame available,
    # so frames are not popped and then silently dropped
    if all(frame_queues):
        frames = [queue.pop(0) for queue in frame_queues]
        results = model.track(frames, persist=True)
        for result in results:
            result.show()

There's a problem when I put in frames (List[frame]).

I used the persist=False option because I put in various input sources and they are not related to each other.

Well, it does give some IDs in each individual result, but viewed across the while loop (the sequence of RTSP captures),

it doesn't work well.

The IDs have no consistency...

I thought it would return the same ID index-wise (frames[0] in the first loop, frames[0] in the second loop, ...).

Is there any solution?

The conclusion is that I want to track object IDs index-wise.

tofulim commented 2 months ago

And as you said in this issue https://github.com/ultralytics/ultralytics/issues/13107,

it seems that the only effective way to track multiple RTSP sources is to increase the number of tracking model instances and track the RTSP input streams individually,

just like below:

import threading

import cv2
from ultralytics import YOLO

def track_stream(capture, model_path):
    model = YOLO(model_path)
    while True:
        frame = capture.read()[1]
        result = model.track([frame], persist=True)
        # Process result

# RTSP sources
video_paths = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Setup video captures
captures = [cv2.VideoCapture(path) for path in video_paths]

# Start tracking threads
threads = []
for capture in captures:
    t = threading.Thread(target=track_stream, args=(capture, 'yolov8n.pt'))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

So... it may have a higher memory cost, but it is the fastest way, right?

It seems that using the fully internal Ultralytics YOLO track is not that effective, because that way no extra tracking instances are built: everything is done by just one model instance, so it is memory-efficient but not response-time-efficient, like below:

# ./list.streams consists of RTSP URL strings, one per line
model.track(source="./list.streams")

glenn-jocher commented 2 months ago

@tofulim hi there!

Thank you for your detailed follow-up and for referencing the related issue. Let's address your concerns regarding tracking multiple RTSP streams effectively.

Tracking Multiple RTSP Streams

You are correct that using multiple model instances in separate threads can be an effective way to handle multiple RTSP streams. This approach ensures that each stream is processed independently, which can improve response times at the cost of increased memory usage.

Example Code for Multi-RTSP Stream Tracking

Here’s a refined example based on your approach, which uses threading to handle multiple RTSP streams with separate YOLO model instances:

import threading
import cv2
from ultralytics import YOLO

def track_stream(capture, model_path):
    model = YOLO(model_path)
    while True:
        ret, frame = capture.read()
        if not ret:
            break
        results = model.track([frame], persist=True)
        # Process results
        for result in results:
            result.show()

# RTSP sources
video_paths = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]

# Setup video captures
captures = [cv2.VideoCapture(path) for path in video_paths]

# Start tracking threads
threads = []
for capture in captures:
    t = threading.Thread(target=track_stream, args=(capture, 'yolov8n.pt'))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

Addressing ID Consistency

If you need to maintain consistent IDs across frames for each stream, using persist=True is generally the right approach. However, if IDs are not consistent, it might be due to the way frames are processed in isolation. Here are a few tips to improve ID consistency:

  1. Ensure Sequential Frame Processing: Make sure frames are processed in the correct order without dropping frames.
  2. Adjust Tracker Parameters: Fine-tune the tracking parameters (e.g., track_high_thresh) to improve tracking performance and ID consistency.
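
For example, a minimal sketch of option 2, assuming a hypothetical my_bytetrack.yaml copied from the stock ultralytics/cfg/trackers/bytetrack.yaml with track_high_thresh (and optionally track_buffer) tuned for your streams:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# "my_bytetrack.yaml" is a hypothetical custom tracker config based on bytetrack.yaml
results = model.track(
    source="rtsp://210.99.70.120:1935/live/cctv002.stream",
    tracker="my_bytetrack.yaml",
    persist=True,
    stream=True,
)
for result in results:
    result.show()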

Conclusion

Using multiple model instances in separate threads is indeed a memory-intensive but effective way to handle multiple RTSP streams with better response times. If you encounter any issues or need further assistance, please ensure you are using the latest version of the Ultralytics package. Additionally, providing a minimum reproducible example can help us diagnose and address any specific problems more effectively. You can find guidance on creating a reproducible example here.

Feel free to reach out if you have any more questions. We're here to help! 😊

tofulim commented 2 months ago

Hi Glenn!

Thanks again. But how can I integrate this with Triton Inference Server?

Should I load many Triton model instances via the instance_group argument (it is mentioned here, in the .pbtxt)?

Is it possible to use the Ultralytics YOLO model's track() with Triton?

As your docs say:

from ultralytics import YOLO

# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")

# Run inference on the server
results = model("path/to/image.jpg")

If I change task='detect' to task='track', does it work?

And I'm also curious whether each thread would track properly.

glenn-jocher commented 2 months ago

Hi there!

Thank you for your question and for your interest in integrating Ultralytics YOLO with Triton Inference Server. Let's address your queries step-by-step.

Integrating with Triton Inference Server

To integrate YOLO models with Triton Inference Server, you can indeed use the instance_group argument in the .pbtxt configuration file to manage multiple model instances. This helps in handling concurrent requests efficiently.
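
As a rough sketch, an instance_group entry in the model's config.pbtxt could look like the following (the model name, platform, and counts are illustrative assumptions, not recommendations):

# config.pbtxt for the "yolo" model (illustrative values)
platform: "tensorrt_plan"
max_batch_size: 8
instance_group [
  {
    count: 2        # two copies of the model served concurrently
    kind: KIND_GPU
    gpus: [0]
  }
]
dynamic_batching {
  max_queue_delay_microseconds: 100
}

With more than one instance, Triton can pull queued requests in parallel, which complements the batching behavior discussed below.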

Using track() with Triton

Currently, the track() method is designed to work with local model instances. However, you can still leverage Triton for inference and then handle tracking logic separately. Here’s how you can set it up:

  1. Inference with Triton: Use Triton for efficient inference.
  2. Tracking Logic: Implement tracking logic in your application.

Example Code

Here’s an example of how you can use Triton for inference and then apply tracking logic:

from ultralytics import YOLO

# Load the Triton Server model for detection
model = YOLO("http://localhost:8000/yolo", task="detect")

# Function to handle tracking logic
def track_objects(frames):
    # Implement your tracking logic here
    # This could involve using a tracking library or custom logic
    pass

# Example usage
frames = ["path/to/image1.jpg", "path/to/image2.jpg"]
results = [model(frame) for frame in frames]

# Apply tracking logic
track_objects(results)

Multi-Threaded Tracking

For multi-threaded tracking, ensure each thread handles its own sequence of frames to maintain ID consistency:

import threading

import cv2
from ultralytics import YOLO

def track_stream(capture, model_url):
    model = YOLO(model_url, task="detect")
    while True:
        ret, frame = capture.read()
        if not ret:
            break
        results = model(frame)
        # Apply tracking logic here

# Setup video captures
video_paths = [
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
    'rtsp://210.99.70.120:1935/live/cctv002.stream',
]
captures = [cv2.VideoCapture(path) for path in video_paths]

# Start tracking threads
threads = []
for capture in captures:
    t = threading.Thread(target=track_stream, args=(capture, 'http://localhost:8000/yolo'))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

Conclusion

Using Triton Inference Server with Ultralytics YOLO can significantly enhance your inference efficiency. While direct support for track() with Triton is not available, you can implement tracking logic separately after performing inference with Triton.

If you have any further questions or need additional assistance, feel free to ask. We're here to help! 😊

shengyu27 commented 1 month ago

@glenn-jocher You said that if I use track to batch-process multiple streams, I need to manually collect multiple frames and feed them into the track method. But my code tells me that track also supports this in the same way as the predict method:

# this is the main code
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
# there are five streams in the file
source = './list.streams'
results = model.track(source, stream=True, persist=True)

I added a print of the shape of the output im around line 255 in ultralytics/engine/predictor.py, and got the following results:

1/5: rtsp://admin:xxx... Success ✅ (inf frames of shape 2560x1440 at 25.00 FPS)
2/5: rtsp://admin:xxx... Success ✅ (inf frames of shape 2560x1440 at 25.00 FPS)
3/5: rtsp://admin:xxx.. Success ✅ (inf frames of shape 2560x1440 at 25.00 FPS)
4/5: rtsp://admin:xxx... Success ✅ (inf frames of shape 2560x1440 at 25.00 FPS)
5/5: rtsp://admin:xxx... Success ✅ (inf frames of shape 2560x1440 at 25.00 FPS)

torch.Size([5, 3, 384, 640]) before inference

The details of the RTSP IPs are hidden for privacy. This shows me that the five streams are integrated into one batch, which is quite different from what you said. Is my verification method incorrect?

tofulim commented 1 month ago

@glenn-jocher if @shengyu27 is right, that would be a huge source of confusion

glenn-jocher commented 1 month ago

Thank you for your observation. The track method does indeed support batch processing of multiple streams, as indicated by your output showing the integrated batch shape. This suggests that the method can handle multiple RTSP streams simultaneously, contrary to earlier assumptions. If you encounter any inconsistencies or issues, please ensure you are using the latest version of the package and provide a reproducible example for further investigation.
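
For reference, a minimal sketch of this usage, assuming list.streams is a plain-text file with one RTSP URL per line (written here from the example sources used earlier in this thread):

from ultralytics import YOLO

# Assumed list.streams: a plain-text file with one RTSP URL per line
sources = [
    "rtsp://210.99.70.120:1935/live/cctv002.stream",
    "rtsp://210.99.70.120:1935/live/cctv002.stream",
]
with open("list.streams", "w") as f:
    f.write("\n".join(sources))

model = YOLO("yolov8n.pt")

# The generator yields one Results object per processed frame
results = model.track(source="list.streams", stream=True, persist=True)
for result in results:
    result.show()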

john09282922 commented 4 weeks ago

Thanks for all the interesting discussion. I would like to use a detection model, not a tracking model. Do I just change model.track to model.detect? Also, I want to use several models concurrently, utilizing multiple RTSP streams. How can I do that?

glenn-jocher commented 4 weeks ago

Yes, for detection without tracking you can use model.predict (or simply call the model) instead of model.track. To use several models concurrently with multiple RTSP streams, consider using threading to handle each stream with its own model instance.
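
A minimal sketch of detection on a single RTSP stream (the URL is a placeholder):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# stream=True returns a generator, so frames are processed as they arrive
for result in model.predict(source="rtsp://210.99.70.120:1935/live/cctv002.stream", stream=True):
    result.show()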

john09282922 commented 4 weeks ago

Yes, for detection without tracking you can use model.predict (or simply call the model) instead of model.track. To use several models concurrently with multiple RTSP streams, consider using threading to handle each stream with its own model instance.

Thanks for replying me for this.

What I mean is that I would like to add more of my own models to the threading approach. Can you guide me on how to set up more models?

Also, is Triton Server a cloud-based service? Or is it possible to use it without internet access?

glenn-jocher commented 4 weeks ago

You can add more models by creating separate threads for each model instance, handling different RTSP streams. Triton Inference Server can be used locally without internet service; it doesn't have to be cloud-based.

john09282922 commented 4 weeks ago

Can you give me some example code for using multiple RTSP streams and multiple models? It would help me understand what you mean.

Thanks, Jungmin

glenn-jocher commented 4 weeks ago

Certainly, Jungmin. You can use Python's threading to handle multiple RTSP streams with different models. Here's a minimal example:

import threading
from ultralytics import YOLO

def process_stream(rtsp_url, model_path):
    model = YOLO(model_path)
    # Add your stream processing logic here

streams = ['rtsp://stream1', 'rtsp://stream2']
models = ['model1.pt', 'model2.pt']

threads = [threading.Thread(target=process_stream, args=(stream, model)) for stream, model in zip(streams, models)]

for thread in threads:
    thread.start()

for thread in threads:
    thread.join()

This sets up separate threads for each stream and model. Adjust the logic inside process_stream as needed.

john09282922 commented 4 weeks ago

(Screenshots attached: 2024-09-07 231629, 2024-09-07 231605)

It is only working with one model and one stream. Can you tell me how to process multiple streams and multiple models?

glenn-jocher commented 3 weeks ago

To process multiple streams with multiple models, use threading to create separate threads for each stream and model combination. This allows concurrent processing of each stream with its respective model.

john09282922 commented 3 weeks ago

To process multiple streams with multiple models, use threading to create separate threads for each stream and model combination. This allows concurrent processing of each stream with its respective model.

Thanks, but I don't understand very well. Can you give me some example code?

glenn-jocher commented 3 weeks ago

Certainly! Here's a simple example to handle multiple streams with multiple models using threading:

import threading
from ultralytics import YOLO

def process_stream(rtsp_url, model_path):
    model = YOLO(model_path)
    # Add your stream processing logic here

streams = ['rtsp://stream1', 'rtsp://stream2']
models = ['model1.pt', 'model2.pt']

threads = [threading.Thread(target=process_stream, args=(stream, model)) for stream, model in zip(streams, models)]

for thread in threads:
    thread.start()

for thread in threads:
    thread.join()

This sets up separate threads for each stream and model. Adjust the logic inside process_stream as needed.

john09282922 commented 3 weeks ago

Above my code, I already followed your simple code, but results only show one stream and one model. Can you fix my code to use multiple models and multiple stream?

glenn-jocher commented 3 weeks ago

Certainly! Ensure each stream has its own thread and model instance. Here's a refined example:

import threading
from ultralytics import YOLO

def process_stream(rtsp_url, model_path):
    model = YOLO(model_path)
    # Add your stream processing logic here

streams = ['rtsp://stream1', 'rtsp://stream2']
models = ['model1.pt', 'model2.pt']

threads = [threading.Thread(target=process_stream, args=(stream, model)) for stream, model in zip(streams, models)]

for thread in threads:
    thread.start()

for thread in threads:
    thread.join()

Ensure each thread processes its respective stream and model. Adjust the logic inside process_stream as needed.
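
If it helps, here is a rough sketch of what the processing logic inside process_stream could look like; the stream URLs and model weights are placeholders, and printing the detection count stands in for your real per-frame handling:

import threading

import cv2
from ultralytics import YOLO

def process_stream(rtsp_url, model_path, name):
    # Each thread gets its own model instance and its own capture object
    model = YOLO(model_path)
    cap = cv2.VideoCapture(rtsp_url)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        results = model.predict(frame, verbose=False)
        # Replace this with your own handling of the detections
        print(f"{name}: {len(results[0].boxes)} objects")
    cap.release()

streams = ['rtsp://stream1', 'rtsp://stream2']  # placeholder URLs
models = ['model1.pt', 'model2.pt']             # placeholder weights

threads = [
    threading.Thread(target=process_stream, args=(s, m, f"stream{i}"))
    for i, (s, m) in enumerate(zip(streams, models))
]
for t in threads:
    t.start()
for t in threads:
    t.join()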