edwardhartnett opened 2 years ago
I know that some of the upcoming HPC systems support 16-bit floating point. For example, AMD's MI250X lists FP16 performance numbers (https://www.amd.com/en/products/server-accelerators/instinct-mi250x).
In the finite element / finite volume I/O area that I am primarily involved in, we have not yet had requests to store this kind of data; I think it is used primarily in machine learning (e.g., PyTorch).
A partial solution would be a float32 <-> float16 filter, so that data lives in memory as float32 but is stored on disk as float16.
float16 is supported in Fortran (as a compiler-provided real kind) but not in standard C. This is related to #2650.
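Since C has no portable float16 type, such a filter would have to pack the 16-bit encoding by hand. Below is a minimal sketch of the forward conversion (a hypothetical helper, not from any existing filter): it truncates the mantissa instead of rounding and flushes half-precision subnormals to zero, and the HDF5 filter registration plus the reverse conversion are omitted.

```c
#include <stdint.h>
#include <string.h>

/* Pack an IEEE 754 binary32 value into binary16 bits (sketch only). */
static uint16_t float32_to_float16(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);                  /* reinterpret the bits */

    uint16_t sign  = (uint16_t)((x >> 16) & 0x8000u);
    uint32_t exp32 = (x >> 23) & 0xFFu;        /* biased binary32 exponent */
    uint32_t mant  = x & 0x007FFFFFu;          /* 23-bit mantissa */

    if (exp32 == 0xFFu)                        /* Inf or NaN */
        return (uint16_t)(sign | 0x7C00u | (mant ? 0x0200u : 0u));

    int32_t exp16 = (int32_t)exp32 - 127 + 15; /* rebias: 127 -> 15 */
    if (exp16 >= 0x1F)                         /* too large: overflow to Inf */
        return (uint16_t)(sign | 0x7C00u);
    if (exp16 <= 0)                            /* too small for a normal half */
        return sign;                           /* simplification: flush to zero */

    return (uint16_t)(sign | (uint32_t)exp16 << 10 | mant >> 13);
}
```

A production filter would round to nearest-even rather than truncate and would emit binary16 subnormals instead of flushing them to zero.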
I am attending the EGU this week and watching some presentations on exascale computing.
One trend has been to stop using NC_DOUBLE (which is very common in models) and use NC_FLOAT instead. (This is being done with the UFS at NOAA, with good results.)
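For concreteness, this is what that switch looks like in the netCDF-C API; a minimal sketch with made-up file, dimension, and variable names:

```c
#include <netcdf.h>

int main(void)
{
    int ncid, dimid, varid;

    if (nc_create("model_out.nc", NC_NETCDF4, &ncid) != NC_NOERR)
        return 1;
    nc_def_dim(ncid, "x", 1000, &dimid);
    /* NC_FLOAT stores 4 bytes per value where NC_DOUBLE stores 8 */
    nc_def_var(ncid, "temperature", NC_FLOAT, 1, &dimid, &varid);
    return nc_close(ncid) != NC_NOERR;
}
```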
However, the trend can be carried even further: I just saw a presentation on the speed of 16-bit floating-point arithmetic. It is at least twice as fast as 32-bit, and in some cases faster still. So the question is: how would such numbers be stored?
(In terms of implementation, it is possible in HDF5 to define a 16-bit floating point type.)
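For reference, here is a sketch of how a binary16 type (1 sign bit, 5 exponent bits, 10 mantissa bits, exponent bias 15) could be built with HDF5's H5T API by narrowing a 32-bit type. This is untested, and the call order matters because the fields must always fit within the type's current precision and size:

```c
#include <hdf5.h>

/* Build a 16-bit IEEE-style float type by narrowing H5T_IEEE_F32LE. */
hid_t make_float16_type(void)
{
    hid_t f16 = H5Tcopy(H5T_IEEE_F32LE);
    H5Tset_fields(f16, 15, 10, 5, 0, 10); /* spos, epos, esize, mpos, msize */
    H5Tset_precision(f16, 16);            /* shrink precision before size */
    H5Tset_size(f16, 2);                  /* 2 bytes per value */
    H5Tset_ebias(f16, 15);                /* binary16 exponent bias */
    return f16;                           /* caller releases with H5Tclose() */
}
```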
I will therefore be the first to raise the question: should netCDF support a 16-bit floating point type?
@gsjaardema any thoughts? Have you heard of this approach yet?