Hi Supporters,

I'm sorry if I'm asking in the wrong place. As far as I know, this project targets neural-network accelerators, but I don't know where else to ask, so I'm asking here.

When applying OpenVX vision functions for feature tracking, namely vxGaussianPyramidNode, vxHarrisCornersNode, vxFastCornersNode, and vxOpticalFlowPyrLKNode, I found that their computation times are huge.

Below are the computation times of these functions on both an i.MX 8MP EVK and a laptop with an Nvidia 1050 Ti. The input is the same for both: a 752x480 grayscale image (EuRoC dataset).
Testing on i.MX 8MP EVK:
vxGaussianPyramidNode: 18.4291 ms
vxHarrisCornersNode: 712.998 ms
vxFastCornersNode: 27.8731 ms
vxOpticalFlowPyrLKNode: 3.22289 ms

Testing on a laptop with Nvidia 1050 Ti (VisionWorks):
vxGaussianPyramidNode: 0.219178 ms
vxHarrisCornersNode: 0.178945 ms
vxFastCornersNode: 0.184957 ms
vxOpticalFlowPyrLKNode: 0.195883 ms
As you can see, the i.MX 8MP EVK is dramatically slower than the Nvidia 1050 Ti: from the numbers above, roughly 84x for vxGaussianPyramidNode, 151x for vxFastCornersNode, and 16x for vxOpticalFlowPyrLKNode, but nearly 4000x for vxHarrisCornersNode.

I used tflite-vx-delegate to run inference on DL models, and its speed is very good, only about 2x slower than the 1050 Ti. But with these traditional vision functions the gap is enormous.

As far as I know, I could re-implement vxGaussianPyramidNode, vxHarrisCornersNode, and vxFastCornersNode with EVIS to get better speed, but I have no experience with EVIS, so it would be difficult for me to speed these functions up that way.

Could you give me some advice on speeding up these OpenVX vision functions? I would really appreciate it.
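For context, this is roughly how such a graph is assembled with the standard OpenVX 1.x C API. It is only a sketch: the pyramid depth, array capacity, and Harris thresholds below are illustrative placeholders, not my exact settings.

```c
#include <VX/vx.h>

int main(void) {
    vx_context ctx   = vxCreateContext();
    vx_graph   graph = vxCreateGraph(ctx);

    /* 752x480 grayscale input, matching the EuRoC frames */
    vx_image   input = vxCreateImage(ctx, 752, 480, VX_DF_IMAGE_U8);
    vx_pyramid pyr   = vxCreatePyramid(ctx, 4, VX_SCALE_PYRAMID_HALF,
                                       752, 480, VX_DF_IMAGE_U8);

    /* Harris parameters (placeholder values) */
    vx_float32 strength = 0.0005f, min_dist = 5.0f, sens = 0.04f;
    vx_scalar s_strength = vxCreateScalar(ctx, VX_TYPE_FLOAT32, &strength);
    vx_scalar s_min_dist = vxCreateScalar(ctx, VX_TYPE_FLOAT32, &min_dist);
    vx_scalar s_sens     = vxCreateScalar(ctx, VX_TYPE_FLOAT32, &sens);
    vx_array  corners    = vxCreateArray(ctx, VX_TYPE_KEYPOINT, 1000);

    vxGaussianPyramidNode(graph, input, pyr);
    vxHarrisCornersNode(graph, input, s_strength, s_min_dist, s_sens,
                        3, 3, corners, NULL);

    /* vxProcessGraph executes all nodes; this is the part being timed */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseGraph(&graph);
    vxReleaseContext(&ctx);
    return 0;
}
```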
I'm using an i.MX 8MP with Yocto BSP imx-yocto-bsp-5_10_52-2_1_0, which includes tim-vx 1.1.32 (https://github.com/NXPmicro/tim-vx-imx).
Many thanks in advance!
Best regards, Hiep