Closed kirillkh closed 10 years ago
By the way, I use 32 terms in my implementation (as opposed to 16 in paper.js), and I still see this issue, which is why I think it is a bug and not merely an approximation artifact (approximation artifacts can be made arbitrarily small by increasing the number of terms).
Hmm, apparently it only happens when all the points are almost collinear like in my example above, and the curve has a very sharp angle.
Interesting! It's been a long while since I wrote that code, and I'm not quite sure anymore where the assumption about the range [a, b] came from. The better fix then would be to split the curve at a, interpolate b to the range of the new curve (_b), and integrate over [0, _b]. Will investigate!
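The splitting fix described above can be sketched roughly like this (a standalone hypothetical sketch, not paper.js's actual code; `splitRight`, `arcLength`, and the choice of a 5-point Gauss rule are my own, for illustration):

```javascript
// Hypothetical sketch of the proposed fix, not paper.js code:
// split the cubic at a (de Casteljau), map b into the right subcurve
// as _b = (b - a) / (1 - a), and integrate speed over [0, _b].

// 5-point Gauss–Legendre nodes and weights on [-1, 1].
const GL5 = [
  [ 0.0,                0.5688888888888889],
  [-0.5384693101056831, 0.47862867049936647],
  [ 0.5384693101056831, 0.47862867049936647],
  [-0.9061798459386640, 0.23692688505618908],
  [ 0.9061798459386640, 0.23692688505618908],
];

// Speed |B'(t)| of a cubic Bezier with control points p[0..3] = [x, y].
function speed(p, t) {
  const u = 1 - t;
  const dx = 3 * u * u * (p[1][0] - p[0][0]) +
             6 * u * t * (p[2][0] - p[1][0]) +
             3 * t * t * (p[3][0] - p[2][0]);
  const dy = 3 * u * u * (p[1][1] - p[0][1]) +
             6 * u * t * (p[2][1] - p[1][1]) +
             3 * t * t * (p[3][1] - p[2][1]);
  return Math.hypot(dx, dy);
}

// Gauss–Legendre arc length of the cubic over [t0, t1].
function arcLength(p, t0, t1) {
  const h = (t1 - t0) / 2, mid = (t0 + t1) / 2;
  let sum = 0;
  for (const [x, w] of GL5) sum += w * speed(p, mid + h * x);
  return h * sum;
}

// de Casteljau: the subcurve of p over [a, 1], as a new cubic.
function splitRight(p, a) {
  const lerp = (u, v) => [u[0] + (v[0] - u[0]) * a, u[1] + (v[1] - u[1]) * a];
  const q0 = lerp(p[0], p[1]), q1 = lerp(p[1], p[2]), q2 = lerp(p[2], p[3]);
  const r0 = lerp(q0, q1), r1 = lerp(q1, q2);
  return [lerp(r0, r1), r1, q2, p[3]];
}

// Length over [a, b], computed on the split-off subcurve.
function lengthViaSplit(p, a, b) {
  return arcLength(splitRight(p, a), 0, (b - a) / (1 - a));
}
```

For a degenerate "cubic" with collinear, evenly spaced control points the speed is constant, so both `arcLength(p, a, b)` and `lengthViaSplit(p, a, b)` should return exactly 3 · (b − a).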
Interesting... So I implemented the described splitting method in a sketch and got almost the same result as the original approach (a difference of 1.4210854715202004e-14). Then I created a loop that simply adds up the line lengths of tiny steps along the curve as a reference, and it appears that it is your approach ([0, b] - [0, a]) that leads to the imprecise results, suggesting that you should probably integrate with a higher iteration count there. I will do more testing with higher counts; I need to create the lookup tables for that first.
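The brute-force reference described above ("adds up line lengths of tiny steps") is presumably something along these lines (my own sketch; the function names and the example curve are made up, not the commenter's actual code):

```javascript
// Assumed shape of the brute-force check: approximate arc length by
// summing chord lengths over many tiny parameter steps.
function polylineLength(pointAt, t0, t1, steps = 100000) {
  let len = 0;
  let [px, py] = pointAt(t0);
  for (let i = 1; i <= steps; i++) {
    const [x, y] = pointAt(t0 + ((t1 - t0) * i) / steps);
    len += Math.hypot(x - px, y - py);
    px = x;
    py = y;
  }
  return len;
}

// Example: a quarter circle of radius 1, whose exact length is π/2.
const quarter = t => [Math.cos(t * Math.PI / 2), Math.sin(t * Math.PI / 2)];
console.log(polylineLength(quarter, 0, 1)); // ≈ Math.PI / 2
```

The chord sum converges from below as the step count grows, so with enough steps it makes a trustworthy yardstick for comparing the two quadrature variants.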
Another update: I extended the lookup tables in my local version and did the integration with 18 instead of 16 terms, and here are the results. They further suggest that my analysis above is correct. Perhaps you've set up your tables incorrectly?
res1: [0, b] - [0, a]: 36.9097712760088
res2: [a, b]: 36.725511647691256
res3: [_a, _b]: 36.72551164769126
res4: linear: 36.726130387873255
res2 - res1: -0.18425962831754106
res3 - res2: 7.105427357601002e-15
res4 - res2: 0.0006187401819985894
I've been very busy the last couple of weeks, but I will get to it eventually.
Alright, feel free to reopen with new information.
Well, I've looked into the accuracy of the integration, and your code is definitely correct in both cases. At the same time, there is something to learn from the case I provided: Gaussian integration doesn't behave well in the presence of high-curvature points inside the curve. The explanation lies in the way Gaussian quadrature works: the integration grid is denser near 0 and 1 (the ends of the interval) than in the middle. This means there will always be curves that cause a significant loss of precision unless your algorithm is able to accommodate high-curvature points. One way is to use an adaptive integration scheme. Another is to analyze each curve to determine which regions are smooth and then integrate each region separately. The latter method is more difficult to implement, but for me the resulting code runs faster. With randomized testing, I went from 8 bits of precision for the same non-adaptive method you are using to 21 bits of precision (with 32 slices in both cases).
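The "adaptive integration scheme" option mentioned above can be sketched as a generic bisection-on-disagreement routine (this is my own illustrative sketch, not kirillkh's proprietary code; `gauss5` and the peaked test integrand are made up):

```javascript
// Generic adaptive quadrature sketch: bisect an interval until the two
// half-interval Gauss estimates agree with the whole-interval estimate.

// 5-point Gauss–Legendre nodes and weights on [-1, 1].
const GL5 = [
  [ 0.0,                0.5688888888888889],
  [-0.5384693101056831, 0.47862867049936647],
  [ 0.5384693101056831, 0.47862867049936647],
  [-0.9061798459386640, 0.23692688505618908],
  [ 0.9061798459386640, 0.23692688505618908],
];

// 5-point Gauss–Legendre estimate of the integral of f over [t0, t1].
function gauss5(f, t0, t1) {
  const h = (t1 - t0) / 2, mid = (t0 + t1) / 2;
  let sum = 0;
  for (const [x, w] of GL5) sum += w * f(mid + h * x);
  return h * sum;
}

// Recursively subdivide where the fixed-grid estimate is unreliable;
// a depth cap guards against pathological integrands.
function adaptive(f, t0, t1, eps = 1e-10, depth = 0) {
  const whole = gauss5(f, t0, t1);
  const mid = (t0 + t1) / 2;
  const halves = gauss5(f, t0, mid) + gauss5(f, mid, t1);
  if (Math.abs(whole - halves) < eps || depth > 20) return halves;
  return adaptive(f, t0, mid, eps / 2, depth + 1) +
         adaptive(f, mid, t1, eps / 2, depth + 1);
}

// A sharply peaked "speed" profile (like a curve with a high-curvature
// point at t = 0.5); its exact integral over [0, 1] is 20 * atan(5).
const peaked = t => 1 / (0.01 + (t - 0.5) ** 2);
console.log(adaptive(peaked, 0, 1)); // ≈ 20 * Math.atan(5)
```

The subdivision naturally concentrates effort around the peak, which is exactly where a single fixed Gauss grid undersamples.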
@kirillkh very interesting! Would you be willing to share that code?
The code is proprietary at this time, but I'll let you know if I decide to release it as open source eventually.
I outlined the ideas for the algorithm here: http://pomax.github.io/bezierinfo/#bezierinfo-comment-172 Since that happened almost a year ago, I can't remember what additional tweaks I made to the code after I wrote that comment, but it describes the gist of the method. The algorithm employs a geometrical trick to find an approximation to the highest-curvature point (peak), but in practice the approximation is so good that I never felt a need to improve upon it.
Now that I've taken a fresh look, I see some typos and errors in the explanation, but the algorithm itself is solid and works well in practice.
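The linked comment has the actual write-up of the geometric peak-finding trick; as a crude stand-in for the overall idea (find the highest-curvature point, split there, integrate each smooth region separately), here is a sketch that simply samples curvature numerically. All names are mine and nothing here reproduces the linked algorithm:

```javascript
// Crude illustration of "split at the curvature peak, integrate each
// region separately". Curvature is found by sampling, not by the
// geometric approximation described in the linked comment.

const GL5 = [
  [ 0.0,                0.5688888888888889],
  [-0.5384693101056831, 0.47862867049936647],
  [ 0.5384693101056831, 0.47862867049936647],
  [-0.9061798459386640, 0.23692688505618908],
  [ 0.9061798459386640, 0.23692688505618908],
];

// First and second derivatives of a cubic Bezier p[0..3] = [x, y].
function derivatives(p, t) {
  const u = 1 - t;
  const dx = 3*u*u*(p[1][0]-p[0][0]) + 6*u*t*(p[2][0]-p[1][0]) + 3*t*t*(p[3][0]-p[2][0]);
  const dy = 3*u*u*(p[1][1]-p[0][1]) + 6*u*t*(p[2][1]-p[1][1]) + 3*t*t*(p[3][1]-p[2][1]);
  const ddx = 6*u*(p[2][0] - 2*p[1][0] + p[0][0]) + 6*t*(p[3][0] - 2*p[2][0] + p[1][0]);
  const ddy = 6*u*(p[2][1] - 2*p[1][1] + p[0][1]) + 6*t*(p[3][1] - 2*p[2][1] + p[1][1]);
  return [dx, dy, ddx, ddy];
}

// Signed-magnitude curvature |x'y'' - y'x''| / |B'|^3.
function curvature(p, t) {
  const [dx, dy, ddx, ddy] = derivatives(p, t);
  const s2 = dx * dx + dy * dy;
  return s2 === 0 ? 0 : Math.abs(dx * ddy - dy * ddx) / Math.pow(s2, 1.5);
}

// Gauss–Legendre arc length over [t0, t1].
function gaussLength(p, t0, t1) {
  const h = (t1 - t0) / 2, mid = (t0 + t1) / 2;
  let sum = 0;
  for (const [x, w] of GL5) {
    const [dx, dy] = derivatives(p, mid + h * x);
    sum += w * Math.hypot(dx, dy);
  }
  return h * sum;
}

// Approximate the curvature peak by coarse sampling.
function peakT(p, samples = 256) {
  let best = 0.5, bestK = -1;
  for (let i = 1; i < samples; i++) {
    const t = i / samples, k = curvature(p, t);
    if (k > bestK) { bestK = k; best = t; }
  }
  return best;
}

// Split at the peak and integrate the two smoother halves separately.
function lengthSplitAtPeak(p) {
  const tp = peakT(p);
  return gaussLength(p, 0, tp) + gaussLength(p, tp, 1);
}
```

Since Gauss nodes cluster near the ends of each interval, placing the split point at the peak moves the densest sampling to exactly where the integrand varies fastest.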
Hi! I am using paper.js's source code as a reference to optimize my own code that deals with arc length. In particular, I was lacking Newton-Raphson in my inverse parameterization algorithm, and the one you use proved to be just the thing I needed.
But one thing is not working well for me. In Numerical.integrate() you do something I haven't seen elsewhere: you integrate between two t parameters, a and b (as opposed to always setting a = 0). I don't know enough math to determine analytically whether this is correct, but in practice it introduces accuracy issues, as demonstrated by the following code:
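(The demonstration snippet from the original report is not preserved in this thread. As a hypothetical stand-in, the comparison it describes — integrating over [a, b] directly versus taking [0, b] − [0, a] — looks like this; the curves and the 5-point rule are my own illustrative choices, not the original code or paper.js's 16-term table:)

```javascript
// Hypothetical stand-in for the missing demonstration code.
const GL5 = [
  [ 0.0,                0.5688888888888889],
  [-0.5384693101056831, 0.47862867049936647],
  [ 0.5384693101056831, 0.47862867049936647],
  [-0.9061798459386640, 0.23692688505618908],
  [ 0.9061798459386640, 0.23692688505618908],
];

// Speed function |B'(t)| for a cubic Bezier p[0..3] = [x, y].
function cubicSpeed(p) {
  return t => {
    const u = 1 - t;
    const dx = 3*u*u*(p[1][0]-p[0][0]) + 6*u*t*(p[2][0]-p[1][0]) + 3*t*t*(p[3][0]-p[2][0]);
    const dy = 3*u*u*(p[1][1]-p[0][1]) + 6*u*t*(p[2][1]-p[1][1]) + 3*t*t*(p[3][1]-p[2][1]);
    return Math.hypot(dx, dy);
  };
}

// Gauss–Legendre estimate of the integral of f over [t0, t1].
function gauss(f, t0, t1) {
  const h = (t1 - t0) / 2, mid = (t0 + t1) / 2;
  let sum = 0;
  for (const [x, w] of GL5) sum += w * f(mid + h * x);
  return h * sum;
}

const a = 0.25, b = 0.75;
const line  = [[0, 0], [1, 0], [2, 0], [3, 0]];     // collinear: both agree
const sharp = [[0, 0], [100, 0], [0, 1], [100, 1]]; // nearly collinear, sharp turn
for (const p of [line, sharp]) {
  const f = cubicSpeed(p);
  console.log('[a, b]:', gauss(f, a, b),
              '[0, b] - [0, a]:', gauss(f, 0, b) - gauss(f, 0, a));
}
```

For the straight-line case both results coincide (the speed is constant, so the quadrature is exact); for the sharp, nearly collinear curve the two estimates can disagree noticeably, which is the kind of discrepancy the report is about.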
I would be happy to hear any input you have on this.