MRFiruzabadi opened this issue 5 months ago
Hey there, I have tried running multiple models on the DPU using PYNQ. Can models execute concurrently? Yes, they can, but it depends on the size of the models. I tried running two models and it worked. But when I tried three models, the third of which was significantly larger than the other two, it caused a segmentation fault.
FYI, I am using the PYNQ library on Ubuntu. To run my app smoothly, I had to load the model before each inference and delete the instance after the inference every time; I haven't found any docs or other references about this. If you have found something in the meantime, please share it here. A rough sketch of the load/run/delete cycle I ended up with is below.
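This is only a minimal sketch of that workaround, assuming the pynq_dpu package with a `dpu.bit` overlay and a compiled `model.xmodel`; the file names and the input dtype are placeholders that depend on your board image and your model's quantization:

```python
import numpy as np
from pynq_dpu import DpuOverlay

def run_once(xmodel_path, input_array):
    """Load one model, run a single inference, then free everything."""
    overlay = DpuOverlay("dpu.bit")        # program the DPU overlay
    overlay.load_model(xmodel_path)        # load this model's compiled .xmodel
    dpu = overlay.runner                   # VART runner bound to the DPU

    in_t = dpu.get_input_tensors()[0]
    out_t = dpu.get_output_tensors()[0]
    input_data = [np.empty(tuple(in_t.dims), dtype=np.float32, order="C")]
    output_data = [np.empty(tuple(out_t.dims), dtype=np.float32, order="C")]
    input_data[0][...] = input_array       # copy the pre-processed input in

    job_id = dpu.execute_async(input_data, output_data)
    dpu.wait(job_id)
    result = output_data[0].copy()

    # Deleting the runner and the overlay before loading the next model is the
    # workaround that avoided the segmentation fault for me.
    del dpu
    del overlay
    return result
```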
Hello,
I need help with Vitis AI on my KR260 board. I've compiled a model for the "DPUCZDX8G_ISA1_B4096" DPU. I have two questions:
Resource usage: How can I determine how much of the DPU's resources my model uses during operation? I understand the baseline resource consumption of the DPU itself on the programmable logic (e.g., LUT, BRAM, DSP, ...); what I want to know is how much of the DPU is occupied by the compiled model.
Running multiple models: Is it feasible to execute multiple models concurrently on a single KR260 board? I'm curious whether it's possible to allocate one DPU to serve multiple models or, alternatively, to distribute models across several DPUs on the board (a rough sketch of what I mean is below).
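To illustrate the second question, this is roughly the setup I have in mind, sketched with the VART Python API; the `.xmodel` file names are hypothetical, and I don't know whether two runners can share one DPU core like this:

```python
import xir
import vart

def make_runner(xmodel_path):
    # Deserialize the compiled model and pick out its DPU subgraph
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
                 if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]
    return vart.Runner.create_runner(subgraphs[0], "run")

# Two models, one DPUCZDX8G_ISA1_B4096 instance: would the runtime
# time-multiplex the single DPU core between these two runners?
runner_a = make_runner("model_a.xmodel")
runner_b = make_runner("model_b.xmodel")
```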
I would greatly appreciate any insights, recommendations, or documentation that addresses these queries.
Thank you in advance for your support.