Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Dilated convolution support #174

Closed · Laadr closed this issue 4 years ago

Laadr commented 4 years ago

Hi,

When compiling my network, I get the following error: [VAI_C][Error] Dilation > 1 is not supported for DepthWise or Deconvolution, current layer is [fc6_separable_conv2d_depthwise]. [VAI_C][Error] Parsing tensorflow model failed.

However, https://www.xilinx.com/support/documentation/ip_documentation/dpu/v3_2/pg338-dpu.pdf (page 21) states that dilation is supported in depthwise convolution layers provided the condition "dilation * input_channel ≤ 256 * channel_parallel && stride_w == 1 && stride_h == 1" holds.
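For reference, here is a minimal sketch of the kind of layer I mean, together with that PG338 condition written out as a check (the input shape, channel counts, and channel_parallel value are just example values, not taken from my actual model):

```python
import tensorflow as tf

# Example only: a depthwise conv with dilation_rate > 1, similar in spirit
# to the fc6_separable_conv2d_depthwise layer that VAI_C rejects.
inputs = tf.keras.Input(shape=(64, 64, 128))          # example shape
x = tf.keras.layers.DepthwiseConv2D(
    kernel_size=3,
    strides=1,
    dilation_rate=6,      # dilation > 1 is what triggers the compiler error
    padding="same",
    name="fc6_separable_conv2d_depthwise",
)(inputs)
model = tf.keras.Model(inputs, x)

# The PG338 depthwise condition as I read it; channel_parallel is a DPU
# configuration parameter, 16 is just an assumed example value.
dilation, input_channel, channel_parallel = 6, 128, 16
stride_w = stride_h = 1
ok = (dilation * input_channel <= 256 * channel_parallel
      and stride_w == 1 and stride_h == 1)
print("PG338 condition satisfied:", ok)  # True here, yet compilation still fails
```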

To clarify, I'm using the version of Vitis AI that was available on the main branch last week.

Could you confirm whether it is supported? If it is, do you have an example of a network using such a layer in your Vitis AI model zoo?

Thanks in advance for your help

Mookel commented 4 years ago

Hi @Laadr, dilation > 1 for DepthWise convolution is not supported by our compiler yet. In the condition "dilation * input_channel ≤ 256 * channel_parallel && stride_w == 1 && stride_h == 1" that you mentioned, the dilation must be 1.
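If you need something the current compiler accepts, a minimal sketch (just an illustration, shapes are example values) is to keep the depthwise layer with dilation_rate=1, or to express the dilation with a standard convolution; since the error only flags DepthWise and Deconvolution, dilation on a regular convolution should still compile, subject to the DPU's own limits:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 128))  # example shape only

# Option A: keep the depthwise layer but drop the dilation, which is the
# only depthwise configuration the current compiler accepts.
dw = tf.keras.layers.DepthwiseConv2D(
    kernel_size=3, strides=1, dilation_rate=1, padding="same")(inputs)

# Option B: use a standard Conv2D with dilation instead; the VAI_C error
# only mentions DepthWise and Deconvolution, so a dilated regular
# convolution should still be accepted (subject to the DPU constraints).
conv = tf.keras.layers.Conv2D(
    filters=128, kernel_size=3, strides=1, dilation_rate=6,
    padding="same")(inputs)
```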

Laadr commented 4 years ago

Thanks for your reply. In that case, I hope this feature will be available soon.

qianglin-xlnx commented 4 years ago

> Thanks for your reply. In that case, I hope this feature will be available soon.

Yes, it will be available in Vitis AI 1.3.