MrSherish opened this issue 1 year ago
I see four options:

- Use the zfp_promote_int16_to_int32() and zfp_demote_int32_to_int16() utility functions. This, however, requires making copies of the data. If you're OK with that approach, then do use these conversion functions rather than just casting your data. See this FAQ.

And fp16 support would be pretty cool too, especially if it also supports bfloat16.
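The copy-based int16 approach above can be sketched as follows. The helper names and the scale factor of 2^15 are my own illustration; zfp's actual zfp_promote_int16_to_int32()/zfp_demote_int32_to_int16() functions operate on one 4^d block at a time and handle the rescaling for you, so prefer those in real code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical whole-array helpers (names are mine): widen int16 samples
 * into an int32 buffer that zfp can compress as zfp_type_int32, then
 * narrow the decompressed result back to int16. Scaling by 2^15 makes the
 * values span the int32 range, which preserves precision through lossy
 * compression. This sketch assumes an exact round trip; real demotion
 * should round and clamp, as zfp's own demote functions do. */
int32_t *widen_i16(const int16_t *in, size_t n)
{
    int32_t *out = malloc(n * sizeof *out);
    if (!out)
        return NULL;
    for (size_t i = 0; i < n; i++)
        out[i] = (int32_t)in[i] * 32768;    /* scale by 2^15 */
    return out;
}

void narrow_i16(int16_t *out, const int32_t *in, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (int16_t)(in[i] / 32768);  /* undo the scaling */
}
```

The widened buffer would then be handed to the ordinary zfp int32 (de)compression path; the extra copy is the cost this option trades for avoiding any library changes.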
See the third bullet above. Currently, FP16 and bfloat16 can be handled similarly to zfp_promote_int16_to_int32(), but with the user performing the conversions (e.g., to/from float). We do eventually want to add full support, but as mentioned above, we'd need to add hundreds of tests and deal with the difficulties of portably converting to/from these types, which typically lack native support, including how to deal with rounding, subnormals, NaNs, etc. This is potentially a lot of work, especially when you consider the multiple back-ends (serial, OpenMP, CUDA, HIP, SYCL), language bindings (C, C++, Python, Fortran), multiple array dimensionalities (1D-4D), the conversion functions themselves, the actual (de)compression pipeline, plus documentation, tests, and examples for the Cartesian product of all these variants. This is a huge undertaking that will be simplified considerably when we transition to a single (de)compression pipeline and to a common implementation across back-ends. Unfortunately, that work will itself take some time.
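For bfloat16, the user-side conversions mentioned here are comparatively easy to sketch, because bfloat16 is by construction the top 16 bits of an IEEE-754 binary32. The helper names below are mine, not zfp API, and the narrowing direction deliberately truncates rather than rounding to nearest, leaving out exactly the rounding/NaN corner cases that full library support would have to get right:

```c
#include <stdint.h>
#include <string.h>

/* Widen a raw bfloat16 bit pattern to float: place the 16 bits in the
 * high half of a binary32 and reinterpret. Lossless for all inputs. */
float bf16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);   /* bit-exact reinterpretation */
    return f;
}

/* Narrow a float to bfloat16 by keeping the sign, exponent, and top
 * 7 mantissa bits. Truncation only; production code should round to
 * nearest-even and preserve NaN payloads. */
uint16_t float_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (uint16_t)(bits >> 16);
}
```

A float array produced this way can be compressed as zfp_type_float today. Binary16 (fp16) needs the analogous but more involved treatment, since its exponent must be rebiased and its subnormals expanded, which is part of why full support is costly.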
Is it possible to introduce 16-bit signed integer support to zfp? If so, how hard would it be, and where should one start?