dhermes / bezier

Helper for Bézier Curves, Triangles, and Higher Order Objects
Apache License 2.0

Detect bad parameterizations #21

Open dhermes opened 7 years ago

dhermes commented 7 years ago

E.g. with curves, this leads to a larger-than-expected loss of accuracy when doing intersections. As a (somewhat contrived) example encountered in the wild:

>>> import numpy as np
>>> import bezier
>>> nodes1 = np.asfortranarray([
...     [2.125, 2.8125],
...     [2.125, 2.9375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=1, _copy=False)
>>> nodes2 = np.asfortranarray([
...     [2.1015625  , 2.875],
...     [2.16015625 , 2.875],
...     [2.220703125, 2.875],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2, _copy=False)
>>> from bezier import _intersection_helpers
>>> I, = _intersection_helpers.all_intersections([(curve1, curve2)])
>>> I.s.hex()
'0x1.0000000000020p-1'
>>> I.t.hex()
'0x1.983e62b67ad27p-3'
>>> # expected 4 sqrt(57) - 30, done in high precision
>>> expected_t = float.fromhex('0x1.983e62b67adeep-3')
>>> np.abs((expected_t - I.t) / np.spacing(expected_t))
199.0

This comes from the bad parameterization of curve2:

x(s) = (s^2 + 60 s + 1076) / 512
y(s) = 23 / 8
==>
f(x, y) = (23 - 8 y)^2 / 64
---
x(s   - 30) = 11 / 32 + s^2 / 512
x(s i - 30) = 11 / 32 - s^2 / 512

which should instead be

x(s) = (61 s + 1076) / 512
y(s) = 23 / 8
==>
f(x, y) = (23 - 8 y) / 8
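The quadratic x-polynomial above can be double-checked by expanding curve2's Bernstein form symbolically; a minimal sketch using sympy, with the control-point x-values 1076/512, 1106/512, 1137/512 read off from nodes2:

```python
import sympy

s = sympy.Symbol("s")
# x-coordinates of curve2's control points, as exact rationals.
b0 = sympy.Rational(1076, 512)
b1 = sympy.Rational(1106, 512)
b2 = sympy.Rational(1137, 512)
# Degree-2 Bernstein form of x(s).
x = sympy.expand(b0 * (1 - s) ** 2 + 2 * b1 * s * (1 - s) + b2 * s**2)
# Agrees with the monomial form (s^2 + 60 s + 1076) / 512.
assert x == sympy.expand((s**2 + 60 * s + 1076) / sympy.Integer(512))
```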

(figure_1: attached plot)


Same root, but non-"bad" parameterization

However, moving the y-values so that curve2 is no longer badly parameterized doesn't fix things:

>>> nodes3 = np.asfortranarray([
...     [2.1015625  , 2.8125],
...     [2.16015625 , 2.875 ],
...     [2.220703125, 2.9375],
... ])
>>> curve3 = bezier.Curve(nodes3, degree=2, _copy=False)
>>> I2, = _intersection_helpers.all_intersections([(curve1, curve3)])
>>> I2.s.hex()
'0x1.983e62b67ada7p-3'
>>> I2.t.hex()
'0x1.983e62b67ad27p-3'
>>> np.abs((expected_t - I2.t) / np.spacing(expected_t))
199.0
>>> # Now the s-value is ALSO expected to be 4 sqrt(57) - 30
>>> np.abs((expected_t - I2.s) / np.spacing(expected_t))
71.0

(figure_1: attached plot)

NOTE: curve3 lies on the algebraic curve given by x = (256 y^2 + 480 y + 929) / 2048, so the quadratic parameterization is appropriate.
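That claim can be verified symbolically; a sketch using sympy, with x3(t) and y3(t) written out from the nodes (x3 coincides with x2, and y3 is linear in t):

```python
import sympy

t = sympy.Symbol("t")
# Parameterization of curve3.
x3 = (t**2 + 60 * t + 1076) / sympy.Integer(512)
y3 = (2 * t + 45) / sympy.Integer(16)
# curve3 lies on the algebraic curve x = (256 y^2 + 480 y + 929) / 2048.
algebraic = (256 * y3**2 + 480 * y3 + 929) / sympy.Integer(2048)
assert sympy.simplify(algebraic - x3) == 0
```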


Newton's Method

What happens if we keep going with Newton's method on the "bad" intersection:

>>> st_values = [(I.s, I.t)]
>>> for _ in range(2):
...     st_values.append(
...         _intersection_helpers.newton_refine(
...             st_values[-1][0], nodes1, st_values[-1][1], nodes2))
...
>>> st_values
[(0.5000000000000036, 0.19933774108299326),
 (0.5, 0.19933774108300079),
 (0.5, 0.19933774108300079)]
>>> len(set(st_values))
2
>>> st_values[-1][0].hex()
'0x1.0000000000000p-1'
>>> st_values[-1][1].hex()
'0x1.983e62b67ae36p-3'
>>> np.abs((st_values[-1][1] - expected_t) / np.spacing(expected_t))
72.0

Newton's method terminates because there is a fairly large band where B1(s) = B2(t) numerically (with parameterizations):

x1(s) = 17 / 8 = 1088 / 512 = 2.125
x2(t) = (t^2 + 60 t + 1076) / 512
y1(s) = (2 s + 45) / 16
y2(t) = 23 / 8 = 46 / 16

This is due to the relative size mismatch in the coefficients of t^2 + 60 t + 1076. Centering a band of 401 ULPs around expected_t (200 on either side), we find that >= 135 of them evaluate x2(t) to exactly 2.125 using 5 different methods. Using the Bernstein basis and de Casteljau algorithm, 196 of them evaluate to 2.125. Using Horner's method, an entire contiguous band of 68 ULPs on either side of expected_t evaluate to exactly 2.125.
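The Horner claim can be spot-checked directly in double precision; a sketch (the exact ULP count may vary slightly with evaluation order):

```python
import numpy as np

expected_t = float.fromhex('0x1.983e62b67adeep-3')

def horner_x2(t):
    # x2(t) = (t^2 + 60 t + 1076) / 512, evaluated via Horner's method.
    return ((t + 60.0) * t + 1076.0) / 512.0

ulp = np.spacing(expected_t)
# Count offsets in [-200, 200] ULPs whose evaluation is exactly 2.125;
# per the band estimate, this is a contiguous run of well over 100 values.
hits = sum(
    horner_x2(expected_t + k * ulp) == 2.125 for k in range(-200, 201)
)
```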

In particular:

>>> val = float.fromhex('0x1.983e62b67ae36p-3')
>>> de_cast00 = 269.0 / 128.0 * (1.0 - val) + 553.0 / 256.0 * val
>>> de_cast01 = 553.0 / 256.0 * (1.0 - val) + 1137.0 / 512.0 * val
>>> de_cast = de_cast00 * (1.0 - val) + de_cast01 * val
>>> de_cast
2.125
>>> de_cast.hex()
'0x1.1000000000000p+1'

And it is the same for the non-"bad" one:

>>> st_values = [(I2.s, I2.t)]
>>> for _ in range(2):
...     st_values.append(
...         _intersection_helpers.newton_refine(
...             st_values[-1][0], nodes1, st_values[-1][1], nodes3))
...
>>> st_values
[(0.19933774108299682, 0.19933774108299326),
 (0.19933774108300079, 0.19933774108300079),
 (0.19933774108300079, 0.19933774108300079)]
>>> len(set(st_values))
2
>>> st_values[-1][0].hex()
'0x1.983e62b67ae36p-3'
>>> st_values[-1][1].hex()
'0x1.983e62b67ae36p-3'
>>> np.abs((st_values[-1][1] - expected_t) / np.spacing(expected_t))
72.0

Since x3(t) = x2(t) and y3(t) = (2 t + 45) / 16 = y1(t), once s = t, we are subject to the same issues as in the intersection of curve1 and curve2.


"Exact" values

However, this may just be an issue with polynomials, or with Newton's method. Computing the roots of s^2 + 60 s - 12 via the quadratic formula also yields a decent amount of error (unless cancellation is avoided):

>>> a, b, c = 1.0, 60.0, -12.0
>>> sq_discrim = np.sqrt(b * b - 4.0 * a * c)
>>> v1 = (-b + sq_discrim) / (2.0 * a)
>>> v1.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - v1) / np.spacing(expected_t))
18.0
>>> v2 = (2.0 * c) / (-b - sq_discrim)
>>> v2.hex()
'0x1.983e62b67adeep-3'
>>> np.abs((expected_t - v2) / np.spacing(expected_t))
0.0
>>> _, v3 = np.sort(np.roots([a, b, c]))
>>> v3.hex()
'0x1.983e62b67adeep-3'
>>> np.abs((expected_t - v3) / np.spacing(expected_t))
0.0
>>> _, v4 = np.sort(np.polynomial.polynomial.polyroots([c, b, a]))
>>> v4.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - v4) / np.spacing(expected_t))
18.0
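The cancellation-avoiding variant (v2 above) generalizes; a minimal sketch of a quadratic solver that forms each root without subtracting nearly equal quantities (assuming a != 0 and real roots):

```python
import math

def quadratic_roots(a, b, c):
    """Both real roots of a x^2 + b x + c, avoiding catastrophic cancellation.

    Uses q = -(b + sign(b) * sqrt(discriminant)) / 2, so neither division
    below involves the difference of nearly equal quantities.
    """
    sq_discrim = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(sq_discrim, b))
    return q / a, c / q

# The polynomial from the example: s^2 + 60 s - 12.
root1, root2 = quadratic_roots(1.0, 60.0, -12.0)
```

For this polynomial, `root2` matches the correctly rounded value of 4 sqrt(57) - 30, i.e. the v2 computation above.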

Algebraic

Though the algebraic approach doesn't (necessarily) suffer from the same issue:

>>> from bezier import curve
>>> ALGEBRAIC = curve.IntersectionStrategy.algebraic
>>> I3, = _intersection_helpers.all_intersections(
...     [(curve1, curve2)], strategy=ALGEBRAIC)
>>> I3.s.hex()
'0x1.0000000000000p-1'
>>> I3.t.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - I3.t) / np.spacing(expected_t))
18.0
>>>
>>> I4, = _intersection_helpers.all_intersections(
...     [(curve1, curve3)], strategy=ALGEBRAIC)
>>> I4.s.hex()
'0x1.983e62b67ae00p-3'
>>> I4.t.hex()
'0x1.983e62b67ae00p-3'
>>> I4.s == I4.t == I3.t
True
dhermes commented 7 years ago

Heart of the problem:

(s^2 + 60 s) / 512 =   3 / 128 = '0x1.8000000000000  p-6'
                               = '0x0.030000000000000p+1'
        1076 / 512 = 269 / 128 = '0x1.0d00000000000  p+1'

for example

                   s = s* + 2^(-55)
                     = '0x1.983e62b67adefp-3'
==> s (s + 60) / 512 = 3/128 + 2^(-58)
                     = '0x1.8000000000001  p-6'
                     = '0x0.030000000000002p+1'
----
                   s = s* + 2^(-54)
                     = '0x1.983e62b67adf0p-3'
==> s (s + 60) / 512 = 3/128 + 2^(-57)
                     = '0x1.8000000000002  p-6'
                     = '0x0.030000000000004p+1'

So anything that differs in the last 6-7 bits from the true value of s ends up contributing to bits of (s^2 + 60 s) / 512 that just get dropped.

If we replace s with S - 30 then we end up with

(S^2 + 176) / 512

and literally only '0x1.e3307cc56cf5cp+4' (S = 30.199337741083) makes that expression equal to 2.125. This is because we have less wiggle room between the values:

176 / 512 = 11 / 32 = 0.34375 = '0x1.6000000000000 p-2'
                                '0x0.58000000000000p+0'
S^2 / 512 = 57 / 32 = 1.78125 = '0x1.c800000000000 p+0'

and the non-constant part has the bigger exponent, meaning the "wrong" bits don't get dropped.
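A quick float check of the "literally only one S" claim; a sketch, relying on the fact that a one-ULP change in S moves S^2 by roughly two ULPs of 912:

```python
import numpy as np

S = float.fromhex('0x1.e3307cc56cf5cp+4')
# This S evaluates (S^2 + 176) / 512 to exactly 2.125 ...
assert (S * S + 176.0) / 512.0 == 2.125

# ... while its immediate floating-point neighbors already miss.
for neighbor in (np.nextafter(S, 0.0), np.nextafter(S, np.inf)):
    assert (neighbor * neighbor + 176.0) / 512.0 != 2.125
```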

Post-script:

This S isn't that helpful: we can shift back to s = S - 30 (with no round-off in the subtraction), but we still end up with the wrong answer:

>>> S = float.fromhex('0x1.e3307cc56cf5cp+4')
>>> s = S - 30.0
>>> s.hex()
'0x1.983e62b67ae00p-3'
>>> from sympy import Rational as R
>>> R(S) - 30 == R(s)  # Check that there is no round-off in `s`
True

It is just "missing" the last 8 bits / 2 hex digits, which corresponds to an 18 ULP error:

>>> import numpy as np
>>> expected_s = float.fromhex('0x1.983e62b67adeep-3')
>>> (s - expected_s) / np.spacing(s)
18.0
>>> 0xe00 - 0xdee
18
dhermes commented 7 years ago

Starting with our target value, we want to find the "exact" interval which must round to that value:

>>> import numpy as np  # 1.12.0
>>> import sympy  # 1.0
>>> 
>>> correct_output = 2.125
>>> output_spacing = sympy.Rational(np.spacing(correct_output))
>>> correct_output = sympy.Rational(correct_output)
>>>
>>> half_left = correct_output - output_spacing / 2
>>> half_right = correct_output + output_spacing / 2

These values are half a ULP away from the desired output. Then we can evaluate our polynomial exactly (using sympy.Rational objects):

>>> def poly_eval(val):
...     return ((val + 60) * val + 1076) / 512
...

We want to consider exact floating-point values near our expected t:

>>> correct_input = float.fromhex('0x1.983e62b67adeep-3')
>>> input_spacing = sympy.Rational(np.spacing(correct_input))
>>> correct_input = sympy.Rational(correct_input)

Considering 200 ULPs on either side of the expected t, we track the ones whose output lands in the interval that must round to 2.125:

>>> import six  # 1.10.0
>>> 
>>> accepted = []
>>> for delta in six.moves.xrange(-200, 200 + 1):
...     curr_input = correct_input + delta * input_spacing
...     curr_output = poly_eval(curr_input)
...     if half_left <= curr_output <= half_right:
...         accepted.append(delta)
... 

It turns out this amounts to the interval of values within 67 ULPs (in either direction) of the expected t:

>>> accepted == list(six.moves.xrange(-67, 67 + 1))
True

The "best" answer is not exactly correct, but its residual is less than 0.05% of a ULP:

>>> error = poly_eval(correct_input) - correct_output
>>> float(error / output_spacing)
-0.0004519444865033115
dhermes commented 7 years ago

As an aside, a method for finding a valid parameterization is described in a technical report of Manocha and Canny. (Other related reports are also useful: citation 1, citation 2, citation 3.)

The technical report also lists a paper of Sederberg.