jvandegriff opened this issue 7 years ago
Even floats contain more digits than are typically significant, so it would be good to have a way to state a limit on the precision. I like the argument that format specifiers convey both precision and magnitude limits, and they are also useful when formatting output. Perhaps a limited set could be suggested for an optional field, like %9.3f and %5d, or %.2e when the data are typically logarithmic. Times don't really fit into this scheme, though; it might be better to simply state the resolution limit as a string ("5 seconds") with suggested forms.
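For illustration, a client might apply such a hint directly when rendering values. In this sketch the `format` field is hypothetical (it is the optional field proposed above, not part of the current spec), and the parameter description is made up:

```python
# Sketch: a client applying a hypothetical "format" precision hint taken
# from HAPI parameter metadata. The "format" field is not in the spec;
# it is the optional field proposed above.
param = {"name": "proton_density", "type": "double",
         "units": "cm^-3", "format": "%9.3f"}  # hypothetical hint

value = 12.3456789
print(param["format"] % value)  # '   12.346' -> width 9, 3 decimals
print("%5d" % 42)               # '   42'     -> width-5 integer
print("%.2e" % 1234.5)          # '1.23e+03'  -> for log-scaled data
```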
Another thing I'll mention again is that when compression is enabled on the server, it's really not wasteful to send doubles even when the resolution is limited. In an experiment where I requested the same data in both ASCII and binary formats and then compressed both, the two came out at about the same size.
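Here is a minimal sketch of that kind of experiment, assuming gzip compression and synthetic values rounded to three decimal places; the exact sizes will vary with the data:

```python
# Sketch of the experiment described above: the same limited-precision
# values serialized as ASCII text and as binary doubles, then gzipped.
import gzip
import random
import struct

random.seed(0)
values = [round(random.uniform(0.0, 100.0), 3) for _ in range(100_000)]

ascii_bytes = "\n".join("%9.3f" % v for v in values).encode("ascii")
binary_bytes = struct.pack("<%dd" % len(values), *values)  # little-endian float64

print("ascii : raw %d, gzipped %d" % (len(ascii_bytes), len(gzip.compress(ascii_bytes))))
print("binary: raw %d, gzipped %d" % (len(binary_bytes), len(gzip.compress(binary_bytes))))
```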
Nand was wondering if there was a way for a server to let clients know what precision is available, or at least what precision is reasonable to expect. The time values in some formats carry very high precision that is often overkill; how can HAPI avoid the same overkill? And the problem applies to numeric data values as well as to times.
We talked about this on a telecon on 2017-09-11. The consensus was not to add any precision indicators. For time values, the length field already indicates a kind of precision. For numeric values, it is up to the server to output a reasonable number of digits for the data it is presenting. For binary data, everything is double precision or 32-bit integers, so there could be some waste there, but we have basically decided to put up with the waste for the sake of simplicity.
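As a side note on how the length field conveys time precision, here is an illustrative sketch; the parameter shown is made up, and the example strings assume HAPI's restricted ISO 8601 time forms:

```python
# Sketch: the "length" of a HAPI isotime parameter implies a time
# precision. The parameter below is illustrative, not from a real server.
time_param = {"name": "Time", "type": "isotime", "length": 24, "units": "UTC"}

examples = {
    20: "2017-09-11T12:34:56Z",      # 20 chars -> whole seconds
    24: "2017-09-11T12:34:56.789Z",  # 24 chars -> milliseconds
}
print(examples[time_param["length"]])  # '2017-09-11T12:34:56.789Z'
```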