PhilterPaper opened this issue 2 months ago
If you have a system where this test fails in 0.087, maybe you can take a look. The failing call is

```
bogen(80.00,230.00 125.00,275.00 45.00 move=0 large=0 dir=1 rot=0.00)
```

This expands to 4 curves on all systems:
```
curve( 85.91,230.00  91.76,231.16  97.22,233.43 )
curve( 102.68,235.69 107.64,239.00 111.82,243.18 )
curve( 116.00,247.36 119.31,252.32 121.57,257.78 )
curve( 123.84,263.24 125.00,269.09 125.00,275.00 )
```
On a system with insufficient accuracy, a 5th curve appears:

```
curve( 125.00,275.00 125.00,275.00 125.00,275.00 )
```

This is a degenerate, zero-length curve that should not be there. Apparently the calculation falls slightly short of the endpoint due to floating-point precision, so an extra tiny segment is emitted to close the gap.
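One defensive fix for this symptom is to discard any emitted segment whose control points and endpoint all coincide with its start point within a small epsilon. Below is a minimal sketch in Python, not the actual Perl `bogen` code; the `(start, c1, c2, end)` tuple representation, the function name, and the epsilon value are my assumptions for illustration:

```python
def drop_degenerate_curves(segments, eps=1e-9):
    """Filter out zero-length cubic Bezier segments.

    Each segment is (start, c1, c2, end), where each point is an (x, y)
    tuple of floats. A segment is considered degenerate when its control
    points and endpoint all coincide with the start point within eps --
    exactly the "tiny 5th curve" symptom described above.
    """
    kept = []
    for start, c1, c2, end in segments:
        if all(abs(a - b) <= eps
               for pt in (c1, c2, end)
               for a, b in zip(start, pt)):
            continue  # zero-length curve: rounding error only, skip it
        kept.append((start, c1, c2, end))
    return kept

# The last real curve from the trace above, plus the spurious 5th curve:
segments = [
    ((121.57, 257.78), (123.84, 263.24), (125.00, 269.09), (125.00, 275.00)),
    ((125.00, 275.00), (125.00, 275.00), (125.00, 275.00), (125.00, 275.00)),
]
print(len(drop_degenerate_curves(segments)))  # the degenerate segment is dropped
```

An alternative is to prevent the extra segment from being generated in the first place (e.g., snapping the loop's final parameter value to the arc's end angle), but a post-hoc filter like this is the smaller, safer patch.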
No, I do not have failures installing 0.087. It sounds like the "tiny last B-spline" problem you found in PhilterPaper/Perl-PDF-Builder/issues/208.
I thought it might be the extended-precision floating point problem I encountered in the t/ tests of some of my libraries, but now it sounds more like the bogen precision issue you raised. I really have to get 3.027 out the door as my top priority, but right after that I need to fix this issue in the PDF::Builder and SVGPDF bogen code as a very high priority.
I'll leave this open for now, to remind me that I owe you a patch for the bogen/elliptical arc code I gave you for SVGPDF. If you make your own fix, please close and let me know that there's some code for me to look at.
Given your interest in tree rats :-) you might find https://www.youtube.com/watch?v=_tW9pfCqIgY interesting, between 20:45 and 22:35.
I don't know exactly what problem you encountered with this test, but it mentions failing on a system with an extended-precision floating point library (I presume in the CPANTS results). I saw this problem with PDF::Table, where direct comparisons of "raw" floating point numbers would fail because test systems built with different extended-precision libraries produced slightly different results (starting at perhaps the 20th significant digit).

The solution I came up with was to round the raw floating point numbers to a much smaller number of significant digits (6 to 12), which come out the same regardless of the library or hardware used, and compare against that shortened result. So far it has worked well, though it sometimes took several iterations of testing to catch all the cases (i.e., a test failing later, showing a different floating point result). More significant digits than that are seldom of importance in this kind of work.
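The significant-digit rounding idea can be sketched like this (Python for illustration; the helper name `sig_round` and the default of 10 significant digits are my own choices, not PDF::Table's actual code):

```python
import math

def sig_round(x, digits=10):
    """Round x to `digits` significant digits.

    Comparing values rounded this way is stable across hardware and
    extended-precision math libraries, whose raw results may diverge
    far past the 10th significant digit.
    """
    if x == 0:
        return 0.0
    # Position of the leading digit determines how many decimal places
    # correspond to the requested number of significant digits.
    return round(x, digits - 1 - math.floor(math.log10(abs(x))))

# Two values that differ only around the 15th significant digit
# compare equal after rounding:
a = 1.00000000000001
b = 1.00000000000002
print(sig_round(a) == sig_round(b))  # True
```

In Perl, the same effect is commonly achieved by formatting both sides with `sprintf` (e.g., the `%g` conversion with an explicit precision) before comparing strings in the test.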