NVIDIA-AI-IOT / deepstream_parallel_inference_app

A project demonstrating how to use nvmetamux to run multiple models in parallel.

Is there a way of getting parallel inferences to appear on the same tile? #6


lolitsjoey commented 11 months ago

Ideally I want three models run in parallel, but with all of their predictions visible on a single 1x1 tiled display.

Not too fussed about what the models actually are at the moment.

Thank You
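
One approach that may work, sketched under the assumption that the app is driven by its deepstream-app-style YAML config (the section and key names below are illustrative and should be checked against your actual config file): nvdsmetamux already merges the object metadata from every parallel inference branch back onto the same batch, so if all branches reference the same source and the tiler is set to a 1x1 grid, the single downstream OSD should draw every model's predictions on one tile.

```yaml
# Hedged sketch, not a verified config: key names follow the
# deepstream-app / parallel-inference-app conventions but may
# differ in your version of the sample.

tiled-display:
  enable: 1
  rows: 1        # 1x1 grid: all output renders on a single tile
  columns: 1
  width: 1920
  height: 1080

# One branch per model; pointing every branch at the same source id
# means nvdsmetamux merges all three models' metadata onto the same
# frames before the OSD draws them.
branch0:
  pgie-id: 1
  src-ids: 0
branch1:
  pgie-id: 2
  src-ids: 0
branch2:
  pgie-id: 3
  src-ids: 0
```

With a setup along these lines, the on-screen display after the metamux should show bounding boxes from all three models overlaid on the same video tile.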

UkuKert commented 7 months ago

I have tried it with 2 models on 1 tile. The pipeline runs for about 2 minutes and then freezes with no error.