**daurnimator** opened this issue 5 years ago (status: Open)
Also, .NET 5 will have a `Half` type.
As a type naming proposal, perhaps `f16_7`: use the number of mantissa/fraction bits as the suffix? Rationale: less precision -> lower number.
| Short name | Long name | Description |
|---|---|---|
| `f16` | `f16_10` | IEEE 754 half-precision 16-bit float / .NET `Half` type |
| `f32` | `f32_23` | IEEE 754 single-precision 32-bit float |
| `f64` | `f64_52` | IEEE 754 double-precision 64-bit float |
| (none?) | `f16_7` | bfloat16 |
| ? | `f19_10` | NVIDIA's TensorFloat (TF32) |
| ? | `f24_16` | AMD's fp24 format |
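The bit layouts in the table can be sanity-checked with a small script (a sketch; the `formats` dict and its (total, exponent, mantissa) triples are taken from the table above, plus one sign bit each):

```python
# Sanity-check: sign (1) + exponent bits + mantissa bits must sum to
# the total width, and the proposed long name is f<total>_<mantissa>.
formats = {
    "f16 (IEEE half)":    (16, 5, 10),
    "f32 (IEEE single)":  (32, 8, 23),
    "f64 (IEEE double)":  (64, 11, 52),
    "bfloat16":           (16, 8, 7),
    "TF32 (TensorFloat)": (19, 8, 10),
    "AMD fp24":           (24, 7, 16),
}
for name, (total, exp, mant) in formats.items():
    assert 1 + exp + mant == total, name
    print(f"{name}: f{total}_{mant}")
```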
We could do what we do with integer types and allow the creation of float types with arbitrary exponent/mantissa bit counts on demand.
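To illustrate what "arbitrary exponent/mantissa bit counts" buys you, here is a minimal sketch (the helper name `float_props` is hypothetical) that derives the basic numeric properties of any such format, assuming an IEEE-style bias with normal numbers and an implicit leading 1:

```python
# Derive basic properties of a float format from its exponent and
# mantissa bit counts, assuming an IEEE-style bias of 2^(e-1) - 1.
def float_props(exp_bits: int, mant_bits: int) -> dict:
    bias = 2 ** (exp_bits - 1) - 1
    emax = bias          # largest normal binary exponent
    emin = 1 - bias      # smallest normal binary exponent
    return {
        "emin": emin,
        "emax": emax,
        "epsilon": 2.0 ** -mant_bits,                   # spacing just above 1.0
        "max": (2 - 2.0 ** -mant_bits) * 2.0 ** emax,   # largest finite value
    }

print(float_props(5, 10))  # IEEE half -> max 65504.0
print(float_props(8, 7))   # bfloat16  -> max ~3.39e38, same range as f32
```

This also shows why bfloat16 is attractive for deep learning: it trades precision (epsilon) for the full float32 exponent range.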
Apparently ARM Neoverse V1 will be getting BFLOAT16 support: https://fuse.wikichip.org/news/4564/arm-updates-its-neoverse-roadmap-new-bfloat16-sve-support/
If you do, also add BFLOAT19, a.k.a. TF32. If we follow the Rust naming convention, that would be `f19b`.
LLVM 11 added support for bfloat16: https://llvm.org/docs/LangRef.html#floating-point-types
BFLOAT16 is a new floating-point format: 16 bits, with an 8-bit exponent and a 7-bit mantissa (vs. the 5-bit exponent and 10-bit mantissa of the IEEE half-precision float, which is currently `f16`), designed for deep learning. Following the Rust-style naming convention mentioned above, it would be `f16b`.
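One practical consequence of that layout: bfloat16 keeps the top 16 bits of a float32 (same sign and 8-bit exponent, mantissa truncated from 23 to 7 bits), so conversion is just a shift. A minimal sketch (the function names are hypothetical; round-to-nearest-even shown here, though hardware may simply truncate):

```python
import struct

def f32_to_bf16(x: float) -> int:
    """Convert a Python float (via float32) to bfloat16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Round to nearest even on the 16 low bits being discarded.
    bits += 0x7FFF + ((bits >> 16) & 1)
    return (bits >> 16) & 0xFFFF

def bf16_to_f32(h: int) -> float:
    """Widen bfloat16 bits back to float32: pad 16 zero mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", h << 16))
    return x

print(hex(f32_to_bf16(1.0)))           # 0x3f80
print(bf16_to_f32(f32_to_bf16(3.14)))  # 3.140625 (only ~2-3 decimal digits survive)
```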
As a more general issue: how should we add new numeric types going forward, e.g. Unum? With Zig not supporting operator overloading, such types would have to be provided by the core language for ergonomic use.