It might be that what we really need to maintain internally is not a `Rat` (a ratio of BigInts) but some kind of symbolic equation, one which we can transform and apply operations to. `Segment` would cease to be a purely linear thing and would return more of those symbolic equations. We could then use that equation to generate as many digits as necessary at output time - be they nanoseconds, milliseconds, or whatever. It would have to be a finite number of digits, of course.
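For illustration, here is a very rough sketch of the kind of thing I mean - none of these names are real `t-a-i` APIs, and a real version would need proper error bounds. Exact nodes stay exact; only `sin` forces an approximation, and that approximation is deferred to output time, to however many digits the caller asks for:

```js
// Hypothetical sketch, not real t-a-i code. Every node evaluates to a BigInt
// scaled by 10^digits, i.e. `digits` decimal places of the true value.
const evaluate = (expr, digits) => {
  const one = 10n ** BigInt(digits)
  switch (expr.kind) {
    case 'rat': // exact ratio of BigInts, truncated only at this point
      return (expr.nu * one) / expr.de
    case 'add':
      return evaluate(expr.a, digits) + evaluate(expr.b, digits)
    case 'mul':
      return (evaluate(expr.a, digits) * evaluate(expr.b, digits)) / one
    case 'sin': { // Taylor series, in scaled-BigInt arithmetic
      const x = evaluate(expr.a, digits)
      let term = x
      let sum = 0n
      for (let k = 0n; term !== 0n; k++) {
        sum += term
        term = -(term * x / one * x / one) / ((2n * k + 2n) * (2n * k + 3n))
      }
      return sum // the last digit or two may be off due to truncation
    }
  }
}

// E.g. sin(1/3) to 25 decimal places, as a scaled BigInt (≈ 0.3271946968):
const expr = { kind: 'sin', a: { kind: 'rat', nu: 1n, de: 3n } }
console.log(evaluate(expr, 25).toString())
```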
Sure, there are better "precise" computable numbers, such as algebraic numbers represented by a polynomial and a segment that isolates a single root, with polynomial coefficients and points represented as rational numbers. The problem is that their implementation is already quite complex, involving polynomial-valued matrices, and operations involving sin/cos would make it much worse, bringing multivariate polynomial systems and Groebner bases into the picture.
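For concreteness, the representation itself would look something like this (purely illustrative - nothing of the sort exists in `t-a-i`); it's the arithmetic on top of it that gets complicated:

```js
// An algebraic real pinned down by an integer polynomial plus a rational
// interval containing exactly one of its roots: here, sqrt(2) as the root
// of x^2 - 2 lying between 1 and 2. (Illustrative only.)
const sqrt2 = {
  coefficients: [-2n, 0n, 1n],   // -2 + 0·x + 1·x²
  lo: { nu: 1n, de: 1n },        // isolating interval lower bound: 1/1
  hi: { nu: 2n, de: 1n }         // isolating interval upper bound: 2/1
}
```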
I don't think there is a reason to dive into this for special smearing models unless somebody comes up with some actual reasoning behind them. Google's choice of linear smearing seems to be a reliable solution. Not that it really matters on the frontend...
My feeling of late is that Meta is irrelevant.
There are various ways of modelling the relationship between TAI and UTC over the course of a leap second, and `t-a-i` offers four of these. For real-world purposes, a smear seems to be the most popular of these options, and `t-a-i` offers one of these, a strictly linear 24-Unix-hour smear from 12:00 to 12:00 across the discontinuity, favoured by Google. However, there are other possible smears. Meta's 25 July 2022 essay advocating for the abolition of leap seconds - whose topic I'm not addressing here - mentions that they use a 17-hour sinusoidal smear beginning at the moment of discontinuity. Whether or not Meta follows through on their ambitions, this is the model they use internally at the time of writing, so it would be nice to add support for it to `t-a-i`.
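To make the two shapes concrete, here is roughly what the offset curves look like as plain functions of Unix time (illustrative only - the linear one follows Google's documented noon-to-noon window, while the sinusoidal one uses a generic (1 - cos)/2 ramp over 17 hours from the discontinuity, which may not be exactly the curve Meta uses):

```js
const HOUR = 3600 // seconds

// Fraction (0..1) of a positive leap second that has been smeared in by Unix
// time `t`, for a leap occurring at Unix time `leap`. Sketches of the two
// shapes, not t-a-i code.

// Linear, Google-style: noon to noon, 24 Unix hours, centred on the leap.
const linearSmear = (t, leap) => {
  const start = leap - 12 * HOUR
  if (t <= start) return 0
  if (t >= start + 24 * HOUR) return 1
  return (t - start) / (24 * HOUR)
}

// Sinusoidal: a cosine ramp over 17 hours starting at the discontinuity.
const sinusoidalSmear = (t, leap) => {
  if (t <= leap) return 0
  if (t >= leap + 17 * HOUR) return 1
  return (1 - Math.cos(Math.PI * (t - leap) / (17 * HOUR))) / 2
}
```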
Right now, implementing this would be annoyingly difficult. `t-a-i` internally uses precise ratios of BigInts in order to ensure precise output - in fact, currently it has "secret" internal APIs which return those exact ratios, with no truncation, for perfect accuracy. The sin function isn't amenable to this approach - in most cases, the result can't be precise. So, what to do? How to modify the API? What level of precision is good enough? Bit of a conundrum. (As an interesting side note, Google rejected sinusoidal smears in part because a linear smear was "simpler [and] easier to calculate".) The linked article also mentions some other smearing models.
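Coming back to the precision problem, the contrast is roughly this (the `nu`/`de` field names are illustrative, not `t-a-i`'s actual `Rat` internals): the linear smear's offset at any instant is an exact ratio of BigInts, so it fits the existing approach, whereas a sinusoidal offset generally isn't, so any API for it has to commit to some output granularity up front.

```js
// Exact: `ms` milliseconds into the 24-hour linear smear corresponds to
// exactly ms / 86_400_000 of the leap second, an exact ratio of BigInts.
const linearFraction = ms => ({ nu: BigInt(ms), de: 86_400_000n })

// Approximate by necessity: sin/cos of a rational argument is generally
// irrational, so the sinusoidal fraction has to be rounded to some chosen
// granularity - here, nanoseconds. How fine a granularity is worth promising
// (and how to compute it reliably) is exactly the open question.
const sinusoidalFractionNanos = ms => {
  const fraction = (1 - Math.cos(Math.PI * ms / 61_200_000)) / 2 // 17 h window
  return BigInt(Math.round(fraction * 1e9))
}
```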
Google's article also mentions several other smears which I might implement someday, but they don't seem to be in active use, so I consider them a lower priority.