HowardHinnant / date

A date and time library based on the C++11/14/17 <chrono> header

Formatted output of sys_time<duration<double, ratio<1>>> is limited to 6 decimal digits #612

Closed: ecorm closed this issue 4 years ago

ecorm commented 4 years ago

When constructing auto t = sys_time<duration<double, ratio<1>>>(duration<double>(0.1234567)) and outputting it as ISO 8601 via format("%FT%TZ", t), only 6 decimal digits are displayed.

I've tracked it to this line in the library: https://github.com/HowardHinnant/date/blob/7848566815ae45ed4bfe26e738476e44451b109e/include/date/date.h#L3774

where the stream's precision is never set, so it defaults to 6. Is this the intended behavior?
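A minimal repro sketch (the exact output string is from memory):

```cpp
#include "date/date.h"
#include <iostream>

int main()
{
    using namespace date;
    using dsec = std::chrono::duration<double, std::ratio<1>>;

    auto t = sys_time<dsec>{dsec{0.1234567}};

    // Prints something like "1970-01-01T00:00:00.123457Z": only 6
    // fractional digits, because the stream used inside format() keeps
    // its default precision of 6.
    std::cout << format("%FT%TZ", t) << '\n';
}
```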

According to the spec for the %S specifier:

> If the precision of the input cannot be exactly represented with seconds, then the format is a decimal floating-point number with a fixed format and a precision matching that of the precision of the input (or to a microseconds precision if the conversion to floating-point decimal seconds cannot be made within 18 fractional digits).

It makes no mention of integral vs floating point for the representation. In the case of floating point, I would interpret "precision matching that of the precision of the input" to be 17 decimal places in the case of double.

ecorm commented 4 years ago

I missed this part of the spec:

> Giving a precision specification in the chrono-format-spec is valid only for std​::​chrono​::​duration types where the representation type Rep is a floating-point type.

However, it's not obvious from the description of %S that a precision can be (or has to be) given via the precision specifier for floating-point representations. Having the precision be automatic and non-configurable for integral reps, but manual and configurable for floating-point reps, is a bit confusing from a user perspective, to be honest. It also makes it impossible to use the same %S specifier with both integral and floating-point reps when a precision is specified.
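If I'm reading the C++20 wording correctly, the precision goes inside the chrono-format-spec itself, something like this (an untested sketch, assuming a standard library with <format> chrono support):

```cpp
#include <chrono>
#include <format>
#include <iostream>

int main()
{
    using fsec = std::chrono::duration<double>;  // floating-point seconds

    // The ".3" precision is allowed here because Rep is floating point;
    // it asks for 3 fractional digits in the %S field.
    std::cout << std::format("{:.3%S}", fsec{1.2345678}) << '\n';

    // The same ".3" with an integral Rep (e.g. std::chrono::seconds)
    // would make the format string invalid.
}
```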

I'm guessing that, with the date library, specifying the precision has to be done on a stream that is passed to date::to_stream. I'll give this a try.

ecorm commented 4 years ago

Yes, setting the precision on a stringstream before passing it to date::to_stream indeed works when the representation is floating point. This seems to be the way to emulate the C++20 precision modifier for the %S specifier.
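Concretely, the workaround looks like this (but see the 2021-08-17 update below before relying on it; the output string is an expectation, not a tested result):

```cpp
#include "date/date.h"
#include <iostream>
#include <sstream>

int main()
{
    using namespace date;
    using dsec = std::chrono::duration<double, std::ratio<1>>;

    auto t = sys_time<dsec>{dsec{0.1234567}};

    std::ostringstream os;
    os.precision(9);                // emulates a C++20 ".9" precision specifier
    to_stream(os, "%FT%TZ", t);
    std::cout << os.str() << '\n';  // e.g. "1970-01-01T00:00:00.1234567Z"
}
```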

I now have the unenviable task of figuring out the actual precision for floating point durations when the ratio is not 1:1. :smile: I see date has some compile-time log10 and pow10 stuff that I might be able to leverage.
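Something along these lines, I think (a hypothetical helper of my own, not an existing utility in date; it assumes Period::den / Period::num is a power of ten):

```cpp
#include <ratio>

// Hypothetical helper: how many fractional decimal digits a duration
// with the given Period needs. Assumes Period::den / Period::num is a
// power of ten (1, 10, 100, ...); other ratios need more care.
template <typename Period>
constexpr unsigned fractional_digits()
{
    unsigned digits = 0;
    for (auto d = Period::den / Period::num; d > 1; d /= 10)
        ++digits;
    return digits;
}

static_assert(fractional_digits<std::ratio<1>>() == 0, "");
static_assert(fractional_digits<std::milli>() == 3, "");
static_assert(fractional_digits<std::ratio<1, 10000>>() == 4, "");
```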

Sorry for bothering you with this, but I hope at least it helps others who stumble on the same problem I had.

The inability to specify precision for integral reps is already covered in issue #562, so I'm closing.

UPDATE 2021-08-17: Setting the precision on the stringstream no longer works. See below.

apollo13 commented 3 years ago

@ecorm Do you have some more details? I am trying something like this:

```cpp
stringstream s;
s.precision(5);
to_stream(s, "%S", utc_time_);
```

but it still formats utc_time_ with its full nanosecond precision.

ecorm commented 3 years ago

@apollo13 What type is utc_time_?

If it has integral representation, and the output precision is always known (i.e. cannot change at runtime), it's probably best to cast to a type with the precision you want:

```cpp
stringstream s;
// Round down to the desired output resolution before formatting.
auto coarser = floor<std::chrono::milliseconds>(utc_time_);
to_stream(s, "%S", coarser);
```

where you can substitute floor with whatever rounding operation you want: duration_cast (truncates toward zero), round, or ceil. (I wish the rounding modes were tag types instead of separate functions.)

If the precision is not milliseconds, microseconds, etc., you can do this instead:

```cpp
stringstream s;
using rep = typename decltype(utc_time_)::rep;
// Round down to 1/10000th-second resolution before formatting.
auto coarser = floor<duration<rep, std::ratio<1, 10000>>>(utc_time_);
to_stream(s, "%S", coarser);
```

where the above example should format to 1/10000th second precision.

If the precision is only known at runtime, you have to convert it to floating point seconds. Only then will s.precision(x) have an effect. (This no longer works. See my posts below.)

```cpp
stringstream s;
// Note: time_point_cast, not duration_cast, since utc_time_ is a time_point.
auto as_real_seconds =
    time_point_cast<duration<long double, std::ratio<1>>>(utc_time_);
s.precision(5);
to_stream(s, "%S", as_real_seconds);
```

I haven't tested any of the above. Let me know if it doesn't work.

apollo13 commented 3 years ago

@ecorm

> If the precision is only known at runtime, you have to convert it to floating point seconds.

Thank you, this is the part that I missed!

ecorm commented 3 years ago

@apollo13 I don't think the s.setf(std::ios::fixed) was really necessary. It's probably a remnant of my own code, where I was outputting the seconds manually in an attempt to control the precision.

ecorm commented 3 years ago

@apollo13 I'm not sure my workaround of setting the stream precision still works after this commit (see line 4003). I'll update my version of the date library and test.

apollo13 commented 3 years ago

@ecorm No need to do that on my account. I switched to explicit casts since I do not need to support arbitrary precision at runtime. I only need second/micro/milli/nano precision, so I just cast accordingly. But I learned a lot from your comments, so thank you for that as well!

ecorm commented 3 years ago

I can confirm that my workaround of setting the stream's precision no longer works. I needed to check this anyway because I was making use of it.

When formatting floating point time types, to_stream now uses width+6 as the seconds precision, where width would be the precision if the time type were integral.
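For example, my understanding of the new behaviour is (an untested sketch, not verified against that exact commit):

```cpp
#include "date/date.h"
#include <iostream>
#include <sstream>

int main()
{
    using namespace date;
    using dms = std::chrono::duration<double, std::milli>;

    auto t = sys_time<dms>{dms{123.456789}};

    std::ostringstream os;
    os.precision(2);           // no longer has any effect on to_stream
    to_stream(os, "%S", t);
    // Integral milliseconds would get width 3, so the floating-point
    // version now prints 3 + 6 = 9 fractional digits:
    // e.g. "00.123456789".
    std::cout << os.str() << '\n';
}
```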

I'm considering just living with to_stream's hard-coded seconds behaviour and requiring users of my serialization library to pass an intermediate integral time type that specifies the desired output resolution.