w3c / csswg-drafts

CSS Working Group Editor Drafts
https://drafts.csswg.org/

[css-easing-1] No precision requirement or recommended algorithm for cubic Beziers #4014

Open kevers-google opened 5 years ago

kevers-google commented 5 years ago

https://drafts.csswg.org/css-easing-1/#cubic-bezier-easing-functions

The spec states "The evaluation of this curve is covered in many sources such as [FUND-COMP-GRAPHICS]." No recommended algorithm or required error tolerance is provided. Chrome is presently failing several WPT tests involving easing functions due to a performance vs. precision trade-off in numerically estimating Bézier outputs. It would be nice to lock down the precision requirements and update the tolerances in test expectations accordingly.

It would also be nice to have a recommended algorithm for root finding: the bisection method, a Newton-bisection hybrid, or Cardano's method, to name a few. Estimating the error in the output function for a given tolerance in the root estimate is non-trivial, and requiring full precision may be prohibitively expensive.
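For illustration only, a minimal Python sketch of the Newton-with-bisection-fallback idea mentioned above (the iteration cap, epsilon, and initial guess here are arbitrary choices, not values from any engine or from the spec):

```python
# Hypothetical cubic Bezier easing evaluator for a curve through (0,0),
# (x1,y1), (x2,y2), (1,1), as in CSS cubic-bezier(). Constants are arbitrary.

def bezier_coord(t, p1, p2):
    # One coordinate of the curve: B(t) = 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3
    return 3 * (1 - t) ** 2 * t * p1 + 3 * (1 - t) * t ** 2 * p2 + t ** 3

def bezier_slope(t, p1, p2):
    # dB/dt, used by Newton's method.
    return 3 * (1 - t) ** 2 * p1 + 6 * (1 - t) * t * (p2 - p1) + 3 * t ** 2 * (1 - p2)

def solve_t_for_x(x, x1, x2, eps=1e-7, newton_iters=8):
    # Try Newton's method first; x itself is a reasonable initial guess.
    t = x
    for _ in range(newton_iters):
        err = bezier_coord(t, x1, x2) - x
        if abs(err) < eps:
            return t
        slope = bezier_slope(t, x1, x2)
        if abs(slope) < 1e-6:  # too flat: Newton would overshoot
            break
        t -= err / slope
    # Fall back to bisection; x(t) is monotonic because 0 <= x1, x2 <= 1.
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        t = 0.5 * (lo + hi)
        if bezier_coord(t, x1, x2) < x:
            lo = t
        else:
            hi = t
    return 0.5 * (lo + hi)

def cubic_bezier_easing(x, x1, y1, x2, y2):
    return bezier_coord(solve_t_for_x(x, x1, x2), y1, y2)
```

Note that even this sketch exposes the problem raised here: the `eps` bound is on the error in x (or t), while the tests measure error in the output y, and relating the two is exactly the non-trivial part.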

stephenmcgruer commented 5 years ago

To add some background to this issue: kevers@ is currently looking at our (Chrome's) cubic Bézier implementation, in particular why we fail WPT tests such as https://wpt.fyi/results/css/css-easing (and some related CSS and Web Animations tests).

As is common, it is a question of performance vs. accuracy. Chrome's implementation first tries 8 iterations of Newton's method; if that fails to find an answer with an acceptably small error, it falls back to bisection until the error is again sufficiently small (or until we give up). We also use quite a coarse error epsilon, again for performance. This is different from the bar used in the tests, which, IIRC from what kevers@ told me, is 30 rounds of the bisection method.

kevers@ is investigating the performance impact of improving our accuracy here, but we noticed that the spec makes no comment on, or allusion to, acceptable error. The WPT tests often use answer ± 0.01, but it is unclear how or why that value was chosen. I believe it would help interop if we could agree on a reasonable way to define the expected accuracy, or otherwise at least spec (perhaps via a non-normative note) that results may vary slightly between user agents due to performance considerations when modelling Bézier curves.
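To make that concrete, one hypothetical way a harness could define the bar is to compare an engine's output against a fixed-depth bisection reference with a ± 0.01 tolerance; the 30 rounds and the 0.01 mirror the numbers mentioned above, but everything else in this sketch is invented:

```python
# Sketch of a precision check: a fixed-depth bisection "reference" answer
# plus a +-0.01 tolerance, mirroring the numbers discussed in this thread.

def bezier_coord(t, p1, p2):
    # One coordinate of a cubic Bezier through (0,0), (p1 at 1/3), (p2 at 2/3), (1,1).
    return 3 * (1 - t) ** 2 * t * p1 + 3 * (1 - t) * t ** 2 * p2 + t ** 3

def reference_easing(x, x1, y1, x2, y2, rounds=30):
    # 30 rounds of bisection pin t down to within 2**-30 on the monotonic x(t).
    lo, hi = 0.0, 1.0
    for _ in range(rounds):
        mid = 0.5 * (lo + hi)
        if bezier_coord(mid, x1, x2) < x:
            lo = mid
        else:
            hi = mid
    return bezier_coord(0.5 * (lo + hi), y1, y2)

def within_wpt_tolerance(engine_value, x, x1, y1, x2, y2, tol=0.01):
    # Hypothetical pass/fail predicate a test harness might apply.
    return abs(engine_value - reference_easing(x, x1, y1, x2, y2)) <= tol
```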

stephenmcgruer commented 5 years ago

cc @birtles @grorg @ChumpChief (listed editors of the spec)

AmeliaBR commented 5 years ago

For comparison, the SVG spec on path-length computation has the following statement:

> It is recommended that authoring products and user agents employ algorithms that produce as precise results as possible; however, to accommodate implementation differences and to help distance calculations produce results that approximate author intent, the ‘pathLength’ attribute can be used…

We may have other text elsewhere in the spec that clearly states that path-drawing approximations are allowed. Of course, those approximations become a big problem when we then try to make ref tests for SVG rendering! And we haven't yet figured out a solution for defining how much variation is allowed.

birtles commented 5 years ago

> This is different from the bar used in the tests, which, IIRC from what kevers@ told me, is 30 rounds of the bisection method.

Gecko looks at the slope and decides which mechanism to use:

https://searchfox.org/mozilla-central/rev/153172de0c5bfca31ef861bd8fc0995f44cada6a/dom/smil/SMILKeySpline.cpp#60-89

For bisection it uses a maximum of 10 iterations.
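Roughly, that dispatch pattern could be sketched as follows; this is loosely modeled on the linked SMILKeySpline.cpp but is not Gecko's actual code, and the 0.02 slope threshold and 4 Newton iterations are placeholders (only the 10-iteration bisection cap comes from the comment above):

```python
# Slope-based dispatch: use Newton's method where x(t) is steep enough,
# otherwise a capped bisection. Thresholds here are placeholders.

def bezier_x(t, x1, x2):
    return 3 * (1 - t) ** 2 * t * x1 + 3 * (1 - t) * t ** 2 * x2 + t ** 3

def bezier_x_slope(t, x1, x2):
    # dx/dt of the curve above.
    return 3 * (1 - t) ** 2 * x1 + 6 * (1 - t) * t * (x2 - x1) + 3 * t ** 2 * (1 - x2)

def solve_x(x, x1, x2, min_slope=0.02, newton_iters=4, max_bisections=10):
    t = x  # initial guess
    if bezier_x_slope(t, x1, x2) >= min_slope:
        # Steep enough that Newton converges quickly and stays stable.
        for _ in range(newton_iters):
            slope = bezier_x_slope(t, x1, x2)
            if slope == 0:
                break
            t -= (bezier_x(t, x1, x2) - x) / slope
        return t
    # Too flat for Newton: bisect, capped at a maximum of 10 iterations.
    lo, hi = 0.0, 1.0
    for _ in range(max_bisections):
        t = 0.5 * (lo + hi)
        if bezier_x(t, x1, x2) < x:
            lo = t
        else:
            hi = t
    return t
```

The trade-off is visible in the cap: 10 bisections bound t only to about 2**-10, which is one concrete reason different engines can legitimately disagree in the low decimal places absent a spec requirement.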

kevers-google commented 5 years ago

Thanks Brian.