amitdo opened this issue 3 years ago
That requires access to test machines which support those instructions. So far I don't have any server with AVX512.
AVX_VNNI (upcoming Alder Lake) - 256-bit/128-bit
This will be available soon. Maybe we can ask users to help test this feature once it launches in a few months.
@stweil, can I add AVX_VNNI (256-bit) detection as a first step?
Sure, but who has such hardware to test it?
There are tens of millions of people with Intel's Alder Lake, and some of them are Tesseract users. We can ask in the forum to test the detection (and later the intdotproductvnni). Hopefully we will find at least one person who has this CPU and is willing to help.
@amitdo, I just noticed that the notebook which I used for AVX512F also has AVX512VNNI. :-) Do you already have code for the detection? If not, I can add it ~(just have to find the right documentation which bit in cpuid is used)~. Wikipedia has the required documentation.
Go ahead!
Could you do the AVX_VNNI detection too? You are more familiar with the detection code than I am.
Since most of our files already have:
(C) Copyright <year>, Google Inc.
I think we can look at other Google projects with the same license as ours, and use parts of the code if we need it.
Detection is now implemented by commit 0daf18c2028e5217cd996522816c3dd2ec1a4198.
I see that you check that AVX/AVX2 is supported by the OS. Do you also check somewhere that AVX512 is supported by the OS?
No, currently only the hardware capabilities are checked for AVX512. Up to now nobody complained, so maybe AVX512F was only used on operating systems which support it. I'll add a check for OS support. Thank you for the hint!
I will try to implement intsimdmatrixavx512vnni.cpp.
Great. Maybe you can use https://github.com/stweil/tesseract/tree/avx512-vnni (which adds the framework, but simply copied the existing AVX2 code) as a starting point.
Yes, thank you. Please open a draft PR with that code. I'll push the needed changes to your PR.
See PR #3894.
Stefan,
There are two ways to implement intsimdmatrixavx512vnni.cpp:
1) The 'right and complete way', which is also the 'complex way':
a) First convert intsimdmatrixavx2.cpp to intsimdmatrixavx512.cpp.
b) Then convert intsimdmatrixavx512.cpp to intsimdmatrixavx512vnni.cpp.
2) The 'simple way', which is incomplete but still expected to work fine and to be much faster than intsimdmatrixavx2.cpp:
AVX512-VNNI supports 256-bit vector operations (via AVX512VL), not just 512-bit ones. Since AVX2 uses 256-bit vectors, I believe only a few changes are needed to convert intsimdmatrixavx2.cpp to intsimdmatrixavx512vnni.cpp, which will use 256-bit vectors instead of 512-bit vectors.
I want to implement the second way in PR #3894. We can still implement the first way later.
What do you think about my suggestion?
intsimdmatrixavx512vnni.cpp?
Fixed :-)
There are two variants: VPDPBUSD (8-bit inputs) and VPDPWSSD (16-bit inputs). For the 8-bit variant, VNNI replaces 3 SIMD instructions with one instruction.
It seems that we can use it inside MultiplyGroup().
https://software.intel.com/content/www/us/en/develop/articles/intel-advanced-vector-extensions-512-intel-avx-512-new-vector-neural-network-instruction.html