Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Faster RCNN #1318

zijian98 opened this issue 1 year ago

zijian98 commented 1 year ago

Hi, may I know whether two-stage detector models such as Faster RCNN trained with PyTorch are currently supported in Vitis-AI 3.0? If not, are there any plans to support them?

Thank You!

quentonh commented 1 year ago

@zijian98 You may be able to deploy Faster RCNN with Vitis AI 3.0 in combination with ONNX Runtime, but to my knowledge you cannot deploy the model on the DPU alone. Faster RCNN and similar two-stage networks are not supported by the Edge DPU today, though I believe we historically deployed it on Alveo, which benefits from being able to offload portions of the graph to the x86 CPU.
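For reference, a minimal sketch of what the ONNX Runtime path could look like in Python, assuming the model has been exported to ONNX and the Vitis AI execution provider is installed on the target. The model path, config file location, and input shape below are placeholders, not values from this thread:

```python
import numpy as np
import onnxruntime as ort

# Placeholder paths -- substitute your exported Faster RCNN ONNX model and the
# vaip_config.json shipped with your Vitis AI ONNX Runtime installation.
MODEL_PATH = "faster_rcnn.onnx"
VAIP_CONFIG = "/path/to/vaip_config.json"

# Subgraphs the DPU can handle are dispatched to the Vitis AI execution provider;
# everything else (e.g. proposal/ROI stages) falls back to the CPU provider.
# The "config_file" option key follows the Vitis AI ONNX Runtime examples; check
# the documentation for your release.
session = ort.InferenceSession(
    MODEL_PATH,
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"config_file": VAIP_CONFIG}, {}],
)

# Dummy input -- real code would feed a preprocessed image whose shape matches
# the exported model's expected input.
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 800, 800), dtype=np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy})
```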

Depending on your use case, you might do well to consider an alternative to Faster RCNN. Specifically, we have deployed the RefineDet network on the DPU, and the model is available in our Model Zoo today. RefineDet offers many of the advantages of two-stage models such as Faster RCNN, but is more efficient to deploy.

https://medium.com/@jonathan_hui/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359

https://arxiv.org/pdf/1711.06897.pdf
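If you do pick up a compiled Model Zoo model such as RefineDet, here is a rough sketch of running an xmodel on the DPU with the VART Python API, following the pattern used in the Vitis AI runtime examples. The model file name, buffer data type, and any pre/post-processing are placeholders:

```python
import numpy as np
import vart
import xir

# Placeholder: a Model Zoo xmodel compiled for your DPU target.
graph = xir.Graph.deserialize("refinedet.xmodel")

# Select the DPU subgraph; CPU subgraphs (pre/post-processing) are skipped here.
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(
    s for s in subgraphs
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)

runner = vart.Runner.create_runner(dpu_subgraph, "run")

# Allocate buffers matching the runner's tensor shapes. Real code would fill the
# input with a quantized, preprocessed image; int8 is assumed here for a
# quantized model -- check the tensor data type for your model.
input_tensors = runner.get_input_tensors()
output_tensors = runner.get_output_tensors()
input_data = [np.zeros(tuple(t.dims), dtype=np.int8) for t in input_tensors]
output_data = [np.zeros(tuple(t.dims), dtype=np.int8) for t in output_tensors]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
# output_data now holds the raw DPU outputs; decoding boxes/scores is model-specific.
```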

--Quenton