Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Running unsupported subgraphs on CPU? #907

Closed. mehdi-nait closed this issue 2 years ago.

mehdi-nait commented 2 years ago

Is there a way to run subgraphs that cannot be run on the DPU on CPU cores instead? I'm using a relatively large model (see picture).

Thanks!

(image: the model's graph attached)

qianglin-xlnx commented 2 years ago

Hi @mehdi-nait, you can use the graph_runner APIs to run the model. You can refer to: https://github.com/Xilinx/Vitis-AI/blob/master/examples/Vitis-AI-Library/samples/graph_runner/resnet_v1_50_tf_graph_runner/resnet_v1_50_tf_graph_runner.cpp

Mookel commented 2 years ago

Hi @mehdi-nait, besides the graph_runner mentioned by @qianglin-xlnx, in VAI 2.5 we also introduced a new feature called WeGO. It is a Vitis-AI in-framework inference solution that works as a framework plugin, integrating the Vitis-AI toolchain into diverse frameworks, including TensorFlow 1.x, TensorFlow 2.x, and PyTorch. WeGO enables graph partitioning and heterogeneous execution automatically: operators supported by Xilinx DPUs (Deep Learning Processing Units) are dispatched to the Vitis-AI native runtime for DPU execution, while operators not supported by Vitis-AI are dispatched to the native framework for CPU execution.

You can refer to the WeGO examples at https://github.com/Xilinx/Vitis-AI/tree/master/examples/WeGO for more details. Thanks very much.
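The partition-and-dispatch idea described above can be illustrated with a small self-contained sketch. This is not the WeGO implementation or its API: the op names, the `DPU_SUPPORTED` set, and the `partition` helper are all invented for illustration. The real plugin performs this split automatically inside TensorFlow/PyTorch on the actual compute graph.

```python
# Toy illustration of WeGO-style heterogeneous partitioning: consecutive
# operators supported by the DPU are grouped into DPU segments; anything
# the DPU cannot run falls back to a CPU segment. The op names and the
# "supported" set below are hypothetical, for illustration only.

DPU_SUPPORTED = {"conv2d", "relu", "maxpool", "add"}  # hypothetical op set

def partition(ops):
    """Split a linear op sequence into (device, [ops]) segments,
    merging adjacent ops that land on the same device."""
    segments = []
    for op in ops:
        device = "DPU" if op in DPU_SUPPORTED else "CPU"
        if segments and segments[-1][0] == device:
            segments[-1][1].append(op)   # extend the current segment
        else:
            segments.append((device, [op]))  # start a new segment
    return segments

if __name__ == "__main__":
    model = ["conv2d", "relu", "maxpool", "custom_nms", "conv2d", "add"]
    for device, segment in partition(model):
        print(f"{device}: {segment}")
```

In a real graph the split happens on a DAG rather than a linear sequence, so the partitioner also has to respect data dependencies between segments, but the supported/unsupported dispatch decision is the same.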