Closed romange closed 3 years ago

Hi @abolz, I just looked for available Grisu implementations and saw that you just wrote this one. Is it possible to use your code as a basis for my studies? I am experimenting with floating-point compression.

Roman
Yes, sure! Are you asking because of a missing license? (I'll add one soon...)
Yes, exactly :) :+1:
The funny thing is that I started looking into the field of double compression just a few days ago (and learning about IEEE-754 in general). Apparently doubles are not very compressible because of their normalized form: the fraction part usually fills most bits due to normalization, even for "short" decimal doubles.
Then I had the idea to opportunistically check whether decimal conversion yields a shorter (possibly fixed-precision) representation and then compress the induced (shorter) integers. The motivation is that in practice many doubles come from humans, and those are in fact "short".
I started learning the brilliant Grisu algorithm in the double-conversion repo, but its interface does not fit my needs and it has lots of unneeded printing code. Imagine my surprise when I discovered your code, written just a week ago! It was a gift from the sky :100:
Thanks!! But really, I pretty much just added some comments while trying to understand the code (and the paper) from Florian Loitsch. And hopefully I got them right :-)
@abolz Do you know why "-73.929589" is converted by Grisu2 to "-73.92958900000001"?
This is probably due to the imprecision introduced when scaling the input number with a power-of-ten.
Actually, numbers in the range [2^(q-1) * 2^alpha, (2^q - 1) * 2^gamma] don't require scaling. For the current implementation this range is ~= [8, 4+e9], so your number could be shorter. This might be addressed by using a different table of cached powers or - easier - by handling the exponent range [alpha, gamma] in the Grisu2 method without scaling.
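(As a quick sanity check of that range, assuming q = 64 and the commonly used Grisu2 parameters (alpha, gamma) = (-60, -32):

    2^(q-1) * 2^alpha  = 2^63 * 2^-60  = 8
    (2^q - 1) * 2^gamma ~= 2^64 * 2^-32 ~= 4.29e+9

which matches the stated range.)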
I am not sure I follow. 73.929589 is in the range [8, 4+e9], so it should not be scaled, right?
Yes, exactly. Typo: The range is [8, 4e+9]
OK, so if it's not scaled, how come the imprecision is introduced? Or how would one choose better alpha, gamma values?
I was just guessing about the imprecision being introduced by scaling and did not check the numbers :/
Without scaling, the situation is something like this: The floating-point number which is closest to A = 73.929589 is

v = 5202332335500843 * 2^-46 = 73.9295890000000071040631155483424663543701171875

and this is the number the algorithm will approximate. Now the neighbors and the boundaries of v are
A
----|----------------+---*------------|----------------+----------------|----
v- m- v m+ v+
v- = 5202332335500842 * 2^-46 = 73.929588999999992893208400346338748931884765625
m- = 10404664671001685 * 2^-47 = 73.92958899999999999863575794734060764312744140625
v = 5202332335500843 * 2^-46 = 73.9295890000000071040631155483424663543701171875
m+ = 10404664671001687 * 2^-47 = 73.92958900000001420949047314934432506561279296875
v+ = 5202332335500844 * 2^-46 = 73.92958900000002131491783075034618377685546875
and any decimal number in the range (m-, m+) (bounds exclusive) will be rounded to v on input, regardless of the rounding mode. To represent these boundaries some extra precision is required. The algorithm therefore normalizes the boundaries, which effectively represents them with precision q = 64 instead of p = 53. This results in the normalized boundaries
w- = 10654376623105725440 * 2^-57 = m-
w+ = 10654376623105727488 * 2^-57 = m+
In order to produce a decimal floating-point number which is guaranteed to lie in (w-, w+), first add 1 ulp to the lower boundary and subtract 1 ulp from the upper boundary:
M- = w- + 1 ulp
= 10654376623105725441 * 2^-57
= 73.929589000000000005574651851247836020775139331817626953125
M+ = w+ - 1 ulp
= 10654376623105727487 * 2^-57
= 73.929589000000014202551579245437096687965095043182373046875
The algorithm now generates a decimal number in the range [M-, M+] (bounds included). But 73.929589 does not lie in this range. The shortest decimal number in this range is actually B = 73.92958900000001, and this is the number which is generated.
<----- 10^-14 -------->
A B
----|----------------+---*-|----------|--------*-|-----+----------------|----
v- m- M- v M+ m+ v+
w- w+
The imprecision here comes from the fact that the precision q of the diy_fp's is not enough to ensure M- <= 73.929589. This requires some more precision (q = 67 is sufficient). A larger q means that M- is closer to w- and M+ is closer to w+.
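The exact neighbors above can be reproduced with a few lines of C++. This is a minimal sketch, assuming an IEEE-754 binary64 input that is normal and not a power of two (so both gaps are a full ulp); it extracts v = f * 2^e and prints the half-ulp boundaries m± = (2f ± 1) * 2^(e-1):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        double d = 73.929589;
        uint64_t bits;
        std::memcpy(&bits, &d, sizeof d);

        // Decompose: 52 stored significand bits, 11 exponent bits, bias 1023.
        uint64_t f = (bits & ((uint64_t{1} << 52) - 1)) | (uint64_t{1} << 52);  // add hidden bit
        int e = static_cast<int>((bits >> 52) & 0x7FF) - 1075;  // 1023 (bias) + 52

        std::printf("v  = %llu * 2^%d\n", (unsigned long long)f, e);           // 5202332335500843 * 2^-46
        std::printf("m- = %llu * 2^%d\n", (unsigned long long)(2 * f - 1), e - 1);
        std::printf("m+ = %llu * 2^%d\n", (unsigned long long)(2 * f + 1), e - 1);
    }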
Thank you very much for your answer! I am not really an expert in the details of the algorithm so your answer will help me to understand the process better.
Meanwhile, I have a small comment about Grisu2Round. It has the following condition: rest + ten_k < dist || dist - rest > rest + ten_k - dist. But if you rearrange the terms in the second part you get: 2*rest + ten_k < 2*dist. My claim is that if rest + ten_k < dist is true, then 2*rest + ten_k < 2*dist is also true (since rest < dist holds as well); thus, rest + ten_k < dist can be removed.
Again, I do not completely understand every aspect of the algorithm, but given that the original condition is correct, it can be simplified.
Thank you very much for your answer! I am not really an expert in the details of the algorithm so your answer will help me to understand the process better.
And answering helps me understand the process better, too! :-) So let me explain this in some more detail...
TL;DR: The tests are written in this order to avoid overflow in unsigned integer arithmetic.
At the start of Grisu2Round:
<-------------------------------------- delta ---->
<---- dist -------------->
<---- rest ---->
----[------------------------|---------*--------------]----
M- v A M+
<----->
ten_k
and we want to round the buffer down (i.e. move A to the left) if we get closer to v. The loop looks like
    while (rest < dist                                  // 1
           && delta - rest >= ten_k                     // 2
           && (rest + ten_k < dist                      // 3
               || dist - rest > rest + ten_k - dist))   // 4
    {
    }
Condition 1: If rest >= dist then A is already to the left of v, and rounding the buffer down would move A away from v. So, ensure that rest < dist, i.e. A lies to the right of v.
Condition 2: If rest + ten_k > delta, then A would leave the interval [M-, M+]. So ensure that rest + ten_k <= delta. But, since all these numbers here are unsigned integers, one needs to rearrange the terms slightly to avoid the possibility of overflow in rest + ten_k. The condition is equivalent to delta - rest >= ten_k, and delta >= rest is a post-condition of Grisu2DigitGen, so the subtraction does not overflow.
Condition 3: If rest + ten_k < dist then A gets closer to v while still being slightly larger than v. In this case: round down. Note that Condition 2 ensures that rest + ten_k does not overflow. (This situation is shown in the diagram above.) (I think this test should actually use <= ...)
Condition 4: If rest + ten_k > dist, then A would end up to the left of v, say at A'. We only want to round down if the distance from A' to v is shorter than the distance from v to A. The former is rest + ten_k - dist and the latter is dist - rest. Both expressions do not overflow because of the preceding conditions (Condition 1 is true, Condition 3 is false). The mathematically equivalent 2*dist > 2*rest + ten_k, on the other hand, might overflow.
This rounding step is not required for the correctness of the algorithm. For random double-precision numbers in the range [0, max], Grisu2 produces a shortest representation about 99.93% of the time. Without rounding, the result is optimal only ~68.7% of the time, compared to ~99.9% with rounding.
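To make the overflow argument concrete, the loop condition can be packaged as follows (a hypothetical standalone helper, not the repo's code):

    #include <cassert>
    #include <cstdint>

    // Should the last digit be decremented, moving A one step of ten_k to the left?
    // Overflow-safe under the stated preconditions; the mathematically equivalent
    // 2*dist > 2*rest + ten_k can wrap around in 64-bit unsigned arithmetic.
    bool RoundDown(uint64_t rest, uint64_t ten_k, uint64_t dist, uint64_t delta) {
        assert(delta >= rest);  // post-condition of the digit generator
        return rest < dist                                // 1: A lies right of v
            && delta - rest >= ten_k                      // 2: stay inside [M-, M+]
            && (rest + ten_k < dist                       // 3: still right of v, but closer
                || dist - rest > rest + ten_k - dist);    // 4: left of v, but closer
    }

Short-circuit evaluation matters here: condition 3's rest + ten_k is only computed once condition 2 has established that the sum fits, and condition 4's subtractions are safe because condition 1 holds and condition 3 failed.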
Wow. It will take me some time to process all this.

When I removed the check I mentioned, the tests passed, so I assumed my logic was correct. Now I need to think about how to construct an input that overflows.

Anyway, I changed other parts of the algorithm (hotspots) and improved the performance by ~15%. I will send them to you later next week.
dist - rest is the distance from A to v. Mathematically speaking, stepping by ten_k will get you closer to v iff ten_k < 2*(dist - rest): if ten_k = 2*(dist - rest) then A' ends up at the same distance, and if it's greater then A' will be farther away.
So, assuming that dist - rest < 2^63, you can replace (3) and (4) with this check. And if delta >= 2*dist then it's possible to omit (2) as well, since if that and ten_k < 2*(dist - rest) hold, then (2) holds. The question is whether it's always correct to assume that dist - rest < 2^63.
Actually, there's no need to assume it. In fact ten_k := 10^k, so ten_k < 2*(dist - rest) is equivalent to (10^k) / 2 < dist - rest, and dist - rest never overflows because of the first check.

Anyway, I think this loop could be rewritten using ilog10 to compute the exact number of iterations for shifting A.
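That combined, overflow-free test could look like this (hypothetical helper; for k >= 1, 10^k is even, so halving with integer division preserves the comparison, and the k = 0 case works out as well):

    #include <cstdint>

    // Equivalent to ten_k < 2*(dist - rest) without the possibly-overflowing
    // multiplication. Requires rest < dist (check 1), so the subtraction is safe.
    bool StepGetsCloser(uint64_t rest, uint64_t ten_k, uint64_t dist) {
        return ten_k / 2 < dist - rest;
    }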
My version of Grisu2DigitGen is here: https://gist.github.com/romange/fe05544b4f630b83d8c309b61c2ffef2

It's faster than the original code because I completely removed the loop with division by 10^k. Instead I compute the decimal length of p1 (including the early-stop criterion) and then use the same algorithm you used in AppendExponent for efficient digit generation. For InitKappa I used a CountDecimalDigit32 algorithm like this one:
    #include <cstdint>

    void SetDigits(uint32_t value, unsigned num_digits, char* buffer) {
        const char DIGITS[] =
            "0001020304050607080910111213141516171819"
            "2021222324252627282930313233343536373839"
            "4041424344454647484950515253545556575859"
            "6061626364656667686970717273747576777879"
            "8081828384858687888990919293949596979899";
        buffer += num_digits;
        while (value >= 100) {
            // Integer division is slow so do it for a group of two digits instead
            // of for every digit. The idea comes from the talk by Alexandrescu
            // "Three Optimization Tips for C++".
            unsigned index = static_cast<unsigned>((value % 100) * 2);
            value /= 100;
            *--buffer = DIGITS[index + 1];
            *--buffer = DIGITS[index];
        }
        if (value < 10) {
            *--buffer = static_cast<char>('0' + value);
            return;
        }
        unsigned index = static_cast<unsigned>(value * 2);
        *--buffer = DIGITS[index + 1];
        *--buffer = DIGITS[index];
    }
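For instance, assuming the caller has already counted the digits (e.g. with CountDecimalDigit32):

    char buf[10];
    SetDigits(123456u, 6, buf);  // fills buf[0..5] with "123456"; no terminating '\0'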
Hi, I commented yesterday on your gist, but realized that GitHub doesn't send any notifications for comments in gists... so I'll comment here again.
Your shortcut for p2 > delta is really nice! And it does speed up the DigitGen procedure.

I just have a question regarding the code in your cut_early branch: as I understand it, in this branch one still needs to find the largest n such that p1 mod 10^n <= delta_shifted < p1 mod 10^(n+1), and then length = k - n is the number of decimal digits which need to be generated. Now the question is: how does your code compute this value of n? I could only come up with a lower bound for n, which is CountDecimalDigits(delta_shifted) - 1.

I'm asking because if one takes this idea further, one might generate most of the digits using a routine like SetDigits by choosing, e.g., alpha = -28 and gamma = 0. I did some initial tests and it is quite fast. But currently I'm computing the value of n in a loop using expensive 64-bit divisions.
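The kind of division loop meant here might look like the following sketch (a hypothetical helper, not the actual code); the one or two 64-bit divisions per iteration are what make it expensive:

    #include <cstdint>

    // Largest n such that p1 % 10^n <= delta_shifted.
    // Assumes p1 < 10^18 so that pow * 10 cannot overflow.
    unsigned LargestN(uint64_t p1, uint64_t delta_shifted) {
        unsigned n = 0;
        uint64_t pow = 1;  // 10^n
        while (pow <= p1 && p1 % (pow * 10) <= delta_shifted) {
            pow *= 10;
            ++n;
        }
        return n;
    }

For p1 = 10000001 and delta_shifted = 1 this yields n = 7, so p1 / 10^7 = 1 remains, matching the example below.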
I started explaining this to you, but then I realized that my code is probably wrong: for a p1 like "10000001" and delta_shifted "1", the correct output would be "1" and not "1000000", right?
Or, for delta_shifted = 0 and p1 = 100000, it will also be suboptimal.

I would like to add those test cases to the test suite. Do you know how I could find the double numbers that would bring me to this scenario? Maybe setting f = 100000 and iterating the exponent over the range of numbers?
I haven't checked, but I think some of these numbers would be integers with trailing zeros, like 123000.0.
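One way to sweep such candidates, along the lines suggested above (a hypothetical sketch; std::ldexp(f, e) builds f * 2^e directly):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Candidate inputs: significands with trailing decimal zeros (and one
    // near-miss), swept over a range of binary exponents.
    std::vector<double> TrailingZeroCases() {
        std::vector<double> cases;
        for (uint64_t f : {100000ull, 123000ull, 10000001ull}) {
            for (int e = -40; e <= 40; ++e) {
                cases.push_back(std::ldexp(static_cast<double>(f), e));
            }
        }
        return cases;
    }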
Yes, you are absolutely right. What happened is that some test numbers do trigger this case, but the test code only checks that the resulting string translates back to the same number (correctness). Since the number I produce has trailing zeros after the decimal point, the string translates back correctly.
I tried to use double_conversion::DoubleToStringConverter::EcmaScriptConverter() as a "golden" generator, but it rounds numbers better: 2048.00024 is rounded to 2048.0003 with Grisu2 and to 2048.0002 with DoubleToStringConverter::ToShortestSingle. I am still checking this.
However, back to my bug. I fixed it with the additional check. Please tell me what you think.
    if (cut_early) {
        uint32_t p1_remainder = 0;
        uint32_t ten_n = 1;
        unsigned power = 0;
        uint32_t delta_shifted = (delta - p2) >> e_shift;  // valid due to cut_early
        bool check_increase_power = true;
        if (delta_shifted > 0) {
            power = base::CountDecimalDigit32(delta_shifted);
            ten_n = base::Power10(power);
            p1_remainder = p1 % ten_n;
            if (p1_remainder > delta_shifted) {
                // One digit too many was cut; step back by one power of ten.
                --power;
                ten_n /= 10;
                p1_remainder %= ten_n;
                check_increase_power = false;
            }
            DCHECK_LE(p1_remainder, delta_shifted);
            p1 /= ten_n;
            length -= power;
        }
        if (check_increase_power) {
            // Keep cutting while p1 ends in decimal zeros.
            while (p1 % 10 == 0) {
                ++power;
                ten_n *= 10;
                p1 /= 10;
                --length;
            }
        }
        SetDigits(p1, length, buffer);
    ......
@abolz Sorry for spamming you with my buggy versions :)

I've updated my gist, and I've fixed the unit test so that I now compare the string output to double_conversion's output to be sure. Besides the ambiguous input of 0.000244140625f, all tests pass now.
Sorry for spamming you with my buggy versions :)
No problem :-) I appreciate your comments, and again: simple but great idea to use SetDigits if possible!!
I don't have the time right now to look closely at your code or do some benchmarks. Just a few comments: whether to optimize the cut_early branch really depends on what numbers you are trying to convert. For uniformly distributed (double-precision) numbers this branch is almost never taken, so a simple loop with /10 and %10 is probably OK. That said, my implementation of this branch uses something like a SetDigitsUntil(n, rest), which produces the leading digits until the remainder is <= rest (generating two digits at once if possible); a sketch follows below. It does indeed speed up the branch, and the idea was to shift (alpha, gamma) such that this branch is taken more often. But it didn't work out... Well, using (alpha, gamma) = (-28, 0) (you basically then have a 64-bit p1 and a 32-bit p2) did speed up my benchmarks for uniformly distributed double-precision numbers, but not that much...
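A minimal sketch of what such a SetDigitsUntil-style routine could look like (hypothetical code, without the two-digits-at-once optimization, and not necessarily the actual implementation):

    #include <cstdint>

    // Write the leading decimal digits of `value`, most significant first,
    // stopping as soon as the still-unwritten remainder is <= rest.
    // Returns the number of digits written.
    unsigned SetDigitsUntil(uint64_t value, uint64_t rest, char* buffer) {
        // Find the largest power of ten not exceeding value.
        uint64_t pow = 1;
        while (pow <= value / 10) pow *= 10;

        unsigned n = 0;
        while (value > rest) {
            buffer[n++] = static_cast<char>('0' + value / pow);
            value %= pow;
            pow /= 10;
        }
        return n;
    }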
I have experimented with the code a little bit. The code now works like this: first, unconditionally generate all the digits of p1 using an optimized SetDigits function, and then check whether too many digits have been generated. This is almost never the case (for the data sets I have tested). Using (alpha, gamma) = (-28, 0) didn't work very well, but (alpha, gamma) = (-57, 32) allows generating two digits at once in the loop for p2. Always generating two digits and later checking whether one digit too many has been produced was a little bit faster than generating the digits one by one. This requires a slightly larger table, but the code is now somewhat faster.
I have tested the code with random numbers and vertex coordinates from different 3D models, but whether the current code is faster depends somewhat on the numbers being converted. If you have some real-world data sets, it would be nice if you could run some benchmarks and share the results. I'll publish my benchmark code once I have cleaned it up, but basically it is a loop like

    std::vector<double> values = ...
    for (auto v : values) {
        char buf[32];
        fast_dtoa::ToString(buf, buf + 32, v);
    }
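A self-contained harness around that loop might look like this (a hypothetical sketch; fast_dtoa::ToString as in the gist, with a checksum added so the optimizer cannot discard the conversions):

    #include <chrono>
    #include <cstdio>
    #include <vector>

    void Benchmark(const std::vector<double>& values) {
        unsigned checksum = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (double v : values) {
            char buf[32];
            fast_dtoa::ToString(buf, buf + 32, v);
            checksum += static_cast<unsigned char>(buf[0]);
        }
        auto t1 = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = t1 - t0;
        std::printf("%.1f ns/conversion (checksum %u)\n",
                    1e9 * dt.count() / values.size(), checksum);
    }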
I am attaching a CSV file containing 3 columns of float numbers. It somewhat reflects the data I handle, which mostly comes from human-based measurements (i.e. "decimal" in nature). I believe it will have less impact on the p2 part, but maybe I am wrong.

Btw, for such numbers I succeeded in writing a good and fast compressor using the Grisu algorithm. It compresses much(!) better than zstd on binary doubles, with less CPU :)
The new version is slightly faster for your CSV file when just converting the numbers to decimal. I'm not sure if it really makes a difference when used in a larger program...
Btw, for such numbers I succeeded in writing a good and fast compressor using the Grisu algorithm. It compresses much(!) better than zstd on binary doubles, with less CPU :)
Great! It would be interesting to see how your algorithm works :-)
I hope to write a blog post or article about this. But in short: I translate a batch of doubles into decimal representations (val, exp, decimal_length) using Grisu, where val * 10^exp = d, decimal_length is the number of decimal digits of val, and val is a 64-bit integer.

Once I have those triples, I find an "optimal" exponent that covers as many values as possible with a small decimal length. Some values are multiplied by powers of ten in order to reduce their exponent and become part of the group. Eventually I have a single exponent that, together with the "small" numbers, can reproduce the original doubles, plus "exceptions": those doubles that either had a long decimal length or could not be united under the chosen exponent. So I have two sections, and I encode each of them differently. I get ~75% compression rate compared to regular methods that achieve ~40% and are much slower.
For example, 123456 * 10^-4 and 768 * 10^-2 can be united under exponent -4 with values 123456 and 76800, but 89898989 * 10^6 cannot be translated to 898989890000000000 * 10^-4 because 898989890000000000 does not fit into a 64-bit number (or maybe it does fit, but it's not worth it). So 89898989 * 10^6 will be encoded in the "exceptions" section.
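A sketch of that rebasing step (hypothetical types and names, only to illustrate the scheme described above):

    #include <cstdint>
    #include <optional>

    struct DecimalTriple {
        uint64_t val;  // decimal significand produced by Grisu
        int exp;       // val * 10^exp reproduces the original double
    };

    // Try to rewrite val * 10^exp with the smaller common exponent `target`
    // (target <= exp), i.e. multiply val by 10^(exp - target). Returns
    // std::nullopt when the scaled value would overflow 64 bits -- such a
    // value goes into the "exceptions" section.
    std::optional<uint64_t> Rebase(const DecimalTriple& t, int target) {
        uint64_t v = t.val;
        for (int i = target; i < t.exp; ++i) {
            if (v > UINT64_MAX / 10) return std::nullopt;
            v *= 10;
        }
        return v;
    }

    // Rebase({123456, -4}, -4) == 123456
    // Rebase({768, -2},   -4) == 76800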