Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What would your feature do ?
By using Intel® Extension for PyTorch, we can take advantage of AVX-512 Vector Neural Network Instructions (AVX-512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs with only a few lines of code changes. That can speed up inference.
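For context, a minimal sketch of the kind of change involved, assuming a plain PyTorch module; the toy model and input below are placeholders, not the webui's actual pipeline:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Placeholder model standing in for the real inference network.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).eval()

# ipex.optimize applies operator fusion and, on supported CPUs, selects
# AVX-512 VNNI / AMX-accelerated kernels (here via bfloat16 inference).
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 64)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)
```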
Proposed workflow
This should run in the background, so the user would only notice that inference is faster.
Additional information
No response

Thank you for the suggestion. The OpenVINO backend already takes advantage of AVX-512 and AMX and provides the best optimizations for the target Intel® device. At present, we see no need to change the backend.
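As an aside, OpenVINO's CPU plugin can report which instruction-set optimizations it will use on the current machine, which is one way to verify the claim above; a small sketch, assuming the openvino package is installed:

```python
from openvino.runtime import Core

core = Core()
# Lists the CPU plugin's available optimizations on this machine,
# e.g. AVX-512/AMX-backed capabilities such as BF16 or INT8.
print(core.get_property("CPU", "OPTIMIZATION_CAPABILITIES"))
```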