OpenMathLib / OpenBLAS

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
http://www.openblas.net
BSD 3-Clause "New" or "Revised" License

BFloat16 data type naming #2767

Closed Guobing-Chen closed 4 years ago

Guobing-Chen commented 4 years ago

In the current code base, the BFloat16 data type is named shxxxx (e.g. shgemm), and the related build flag is BUILD_HALF. It seems we simply treat BF16 as half-precision float. That is not correct under the IEEE standard definition: half precision is FP16, which differs from BFloat16 in both format and content. OpenBLAS may want to support both BFloat16 and FP16, as they are valuable for different domains: BFloat16 mostly for deep learning and machine learning, FP16 more for traditional scientific computation and telecom processing.
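For reference, here is a minimal C sketch of the format difference (a rough illustration; the helper name is made up, not an OpenBLAS function). BF16 keeps float32's 8-bit exponent, so a float32 can be converted to BF16 by simply truncating its low 16 bits, which is not possible for FP16 with its 5-bit exponent:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* IEEE 754 half precision (FP16): 1 sign | 5 exponent | 10 mantissa
     * bfloat16 (BF16):                1 sign | 8 exponent |  7 mantissa
     * BF16 shares float32's exponent width, so truncation works: */
    static uint16_t f32_to_bf16_truncate(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* type-pun via memcpy */
        return (uint16_t)(bits >> 16);    /* keep the high 16 bits */
    }

    int main(void) {
        float x = 3.1415926f;
        printf("bf16 bits of %f: 0x%04x\n", x, f32_to_bf16_truncate(x));
        return 0;
    }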

I suggest changing the data type naming and the build flag to bxxxx (e.g. bgemm) and BUILD_BF16, while leaving shxxxx for the real half-precision type, FP16, or even renaming that to hxxxx (e.g. hgemm). In any case, using shxxxx and BUILD_HALF is quite confusing to the community; other math libraries such as Eigen and oneDNN (previously MKL-DNN) use keywords like bf or bf16.

I can submit a PR to make this change if we are OK with it.
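To illustrate the proposal, a hypothetical prototype under the suggested "b" prefix might look like the following; the bfloat16 typedef and the exact argument list are illustrative assumptions, not the real OpenBLAS interface (shgemm at the time took BF16 inputs and produced float output):

    /* Hypothetical prototype for the proposed "b" prefix (was shgemm). */
    typedef unsigned short bfloat16;   /* 16-bit storage for BF16 values */

    void bgemm(const char *transa, const char *transb,
               const int *m, const int *n, const int *k,
               const float *alpha, const bfloat16 *a, const int *lda,
               const bfloat16 *b, const int *ldb,
               const float *beta, float *c, const int *ldc);

    /* The freed-up "h" (or "sh") prefix could then denote IEEE FP16,
     * e.g. an hgemm with the analogous signature. */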

martin-frbg commented 4 years ago

At least I have taken the sudden quiet to mean there is no objection to the renaming, and I still plan to do it before the 0.3.11 release. (I am currently stuck on the build system rearrangements necessary to allow building only the parts of the library for a particular type, but will put that on hold if necessary so that we do not get two releases with the "wrong" API.)
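For context, a type-restricted build along those lines might eventually be invoked roughly as follows; the flag names here are an assumption sketching the idea, not a documented interface:

    # hypothetical per-type build selection
    make BUILD_SINGLE=1 BUILD_DOUBLE=1   # only the real s/d kernels
    make BUILD_BF16=1                    # only the BF16 kernels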

No objections from me regarding naming of the integer operations; there are obviously too many types to represent them all with sensible single-letter prefixes. (Though perhaps, if there is one particularly popular type, we could create an old-style alias for it; I am thinking IGEMM, and possibly UGEMM for its unsigned counterpart.)

conradsnicta commented 4 years ago

> (Though perhaps, if there is one particularly popular type, we could create an old-style alias for it; I am thinking IGEMM, and possibly UGEMM for its unsigned counterpart.)

I'd argue against that, as there is no concrete size for a plain integer. On 64-bit platforms it could be either 32 bits or 64 bits (e.g. see the mess with the "long int" and "long long" types in C++11: https://en.wikipedia.org/wiki/C++11#Type_long_long_int).

One possible approach is to forcibly assume that the integer size is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms. However, this can lead to confusion and is inconsistent with the behaviour of the prefix in the sgemm/dgemm/cgemm/zgemm cases (i.e. s = always 32 bits, d = always 64 bits, etc.).
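To make the ambiguity concrete, a small C program shows that the width of the plain integer types depends on the platform's data model (LP64 vs. LLP64), while the <stdint.h> fixed-width types pin it down:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* LP64 (64-bit Linux/macOS): long is 8 bytes.
         * LLP64 (64-bit Windows):    long is 4 bytes.
         * Same source, different widths.                  */
        printf("sizeof(int)     = %zu\n", sizeof(int));
        printf("sizeof(long)    = %zu\n", sizeof(long));
        printf("sizeof(int32_t) = %zu\n", sizeof(int32_t));  /* always 4 */
        printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));  /* always 8 */
        return 0;
    }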

martin-frbg commented 4 years ago

That is why I suggested doing this only if there is some widely used type that "could" be considered the "typical" integer for current deep learning algorithms, e.g. if "everybody" was already gravitating towards s32 or u16, we could give that one the IGEMM alias. Of course this would only create confusion if "everybody" instead uses whatever best suits their particular platform and problem. I am not that familiar with ML.

Djip007 commented 3 years ago

Just to note that Armv8.2 has FP16 NEON compute support... If someone wants to test it, the Cortex-A55 supports it; a board can be found at https://www.hardkernel.com/shop/odroid-c4/...
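For anyone wanting to try that, here is a minimal C sketch using the ACLE FP16 NEON intrinsics; the feature macro and intrinsics are standard ACLE, but they are only enabled with suitable compiler flags (e.g. -march=armv8.2-a+fp16 on GCC/Clang):

    #include <stdio.h>

    #if defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
    #include <arm_neon.h>
    int main(void) {
        float16x8_t a = vdupq_n_f16((float16_t)1.5f);  /* eight FP16 lanes */
        float16x8_t b = vdupq_n_f16((float16_t)2.0f);
        float16x8_t c = vaddq_f16(a, b);               /* native FP16 add  */
        printf("lane 0: %f\n", (double)vgetq_lane_f16(c, 0));
        return 0;
    }
    #else
    int main(void) {
        puts("FP16 NEON arithmetic not enabled for this target");
        return 0;
    }
    #endif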