Open DogukanAltay opened 4 years ago
I have never tried it with two models. One simple solution would be to create a subprocess for the requested model (or even one per model) and use queues to communicate with it. It will also be much more efficient, as both models can run in parallel. I am not actively maintaining this repo; I am actually using the MobileNet SSD implementation aastal wrote.
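A minimal sketch of the suggested workaround, using Python's `multiprocessing` with one worker process per model. The engine loading and inference are stubbed out (the real code would deserialize the TensorRT engine inside the worker, e.g. with `tensorrt.Runtime`, so each model gets its own CUDA context); the model paths and the `infer` stub here are hypothetical placeholders, not part of this repo's API.

```python
import multiprocessing as mp

def model_worker(model_path, in_q, out_q):
    # Hypothetical: in the real setup, deserialize the TensorRT engine
    # here, INSIDE the subprocess, so each model owns its own CUDA
    # context and the two engines cannot clobber each other's memory.
    # Stubbed so this sketch is self-contained and runnable.
    def infer(frame):
        return {"model": model_path, "frame": frame}

    while True:
        frame = in_q.get()
        if frame is None:  # sentinel: shut the worker down
            break
        out_q.put(infer(frame))

def start_worker(model_path):
    """Spawn a worker process and return (process, input queue, output queue)."""
    in_q, out_q = mp.Queue(), mp.Queue()
    p = mp.Process(target=model_worker, args=(model_path, in_q, out_q),
                   daemon=True)
    p.start()
    return p, in_q, out_q

if __name__ == "__main__":
    # Two independent workers; each would hold one tiny-yolov3 engine.
    p1, in1, out1 = start_worker("tiny-yolov3-a.trt")
    p2, in2, out2 = start_worker("tiny-yolov3-b.trt")

    # Feed the same frame to both; they run in parallel processes.
    in1.put("frame-0")
    in2.put("frame-0")
    print(out1.get(timeout=10)["model"])
    print(out2.get(timeout=10)["model"])

    # Clean shutdown via sentinels.
    in1.put(None)
    in2.put(None)
    p1.join()
    p2.join()
```

The main script then picks which input queue to push frames onto "depending on the situation", and both engines stay resident without sharing a process.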
On Fri, Jan 10, 2020, 00:02 DogukanAltay notifications@github.com wrote:
Basically, I am trying to load 2 different tiny-yolov3 models into memory and run inference with one or the other depending on the situation. However, when I try to load 2 different YOLO models in the same Python script, the library messes up the memory locations and fails.
Any help would be appreciated. I will post if I come up with a solution.
The suggested approach is the logical one; however, it is not applicable for my requirements. Thanks for the work and the help.