Open · graouts opened this issue 2 years ago
That all makes sense to me. Thanks for picking this up.
Is there a related CSS value interpolation API?
I hadn't considered that, but it's something that's worth exploring alongside this proposal. Would you have something to propose?
@graouts
Is there a related CSS value interpolation API?
I hadn't considered that, but it's something that's worth exploring alongside this proposal. Would you have something to propose?
I'm not super experienced in interpolation and/or making proposals for the CSSWG. I wouldn't mind trying, though I'm most likely going to need some help.
cc @mattgperry
Hey @graouts
This came out of a discussion between @okikio and myself about the limitations custom effects leave us with. I'm super excited about custom effects, but I think there's a remaining black hole when it comes to interpolating complex values the way `KeyframeEffect` can, which would limit the ability of many JS libraries to leverage the interpolators already present in the browser.
My first thought was having `CustomEffect` also optionally support keyframes, but I think this limits what is clearly a low-level API. For example, it wouldn't allow for custom easing functions like `circIn`, etc. Whereas a stand-alone interpolator API would:
```js
const mixColors = new Interpolation("#f00", "rgba(255,255,255,0.5)")

document.timeline.animate(p => {
  arbitraryElement.innerHTML = mixColors(circIn(p))
})
```
Ideally the interpolator would support unclamped progress values so we could support overshoot easing.
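For instance (a quick sketch reusing the proposed `Interpolation` API above; the `backOut` helper and `mixWidths`/`arbitraryElement` names are illustrative only, not part of the proposal):

```js
// An overshoot easing: returns values above 1 near the end of the animation,
// which an unclamped interpolator could extrapolate past the end value.
const backOut = p => 1 + 2.70158 * Math.pow(p - 1, 3) + 1.70158 * Math.pow(p - 1, 2);

const mixWidths = new Interpolation("0px", "100px");
document.timeline.animate(p => {
  // Around p ≈ 0.8, backOut(p) > 1, so the width briefly overshoots 100px.
  arbitraryElement.style.width = mixWidths(backOut(p));
});
```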
Supported interpolators that would be helpful:
- `20px` ↔️ `5px`, `100%` ↔️ `0`
- `blue` ↔️ `rgba(0,0,0,.5)`
- `none` ↔️ `scale(3) translateX(100px)`
- `5px 5px rgba(0,0,0,0.1)` ↔️ `10px 10px rgba(0,0,0,0.05), 2px 2px 2px rgba(0,0,0,0.2)`

All of these (and more) are already leveraged within browsers; I think direct access would pair very well with `CustomEffect`.
In addition to what @mattgperry posted, something like this would also be awesome:

```js
CSS.mix("50% by ease", "red", "blue") // purple (in rgb format)
CSS.mix("50% by ease", "currentColor", "blue", document.querySelector(".red-text")) // purple (in rgb format)

// Easing is linear by default
// Percentages function like they would in normal CSS
CSS.mix(0.5, "100%", "200", document.querySelector(".red-text"), "width") // 150px
```
The API would be very similar to the currently discussed CSS counterpart `mix()`:
https://github.com/w3c/csswg-drafts/issues/581#issuecomment-926353789
See also #6697 and #6700.
The section on custom effects in the current level 2 spec starts with this issue:
This whole feature needs to be revisited. The current thinking is that rather than having custom effects, we should simply have an onupdate callback on each animation effect. That would allow, for example, augmenting an existing effect with a function that performs logging or triggers additional actions at certain times. With the current arrangement, doing this would require adding a parent group just to achieve this.
I personally think that a dedicated `CustomEffect` interface is a simple and purposeful way to specify an animation where its application is performed by script. I expect that it is simpler to specify how this specific class of effects would work rather than trying to add an `onupdate` callback that would apply to keyframe effects as well.
I think that the timing with respect to other effects needs to be clear either way. I think having a set of post-animation update callbacks is probably simpler to be honest.
The other thing that may be cleaner about an update callback is that we may be able to skip it if the effect easing resulted in no change. E.g.:

```js
canvas.animate({}, { duration: 1000, easing: 'steps(10)' }).addEventListener('update', (localTime) => {
  // Only needs to be called 10 times?
});
```
You'll also notice the lack of a `target` for `CustomEffect`. I believe that it should not be necessary to specify a target for a custom effect since it may not target a single node, or even a node at all, but rather a JS object controlling the rendering of a scene using `<canvas>` APIs.
I agree that conceptually it makes sense to not have a target, though it could be a nice feature if we could skip custom effects if the target (e.g. the canvas being drawn to) was not in view or at least if the target is detached.
I think @bramus also had a good use case where an update callback would be much simpler ergonomically than needing to create a separate animation.
Thanks for pointing me to this thread, @flackr.
I’m currently building a demo that uses a scroll-driven animation on an `input[type=range]`. The progress of that SDA is then used to rotate a 3D element on the page in response. You can try it out in Chrome Canary with the Experimental Web Platform Features flag enabled: as you drag the slider, the 3D model rotates: https://codepen.io/bramus/pen/VwoYoLR
One thing I found missing while building this is having a `progress` event on the `Animation` itself to hook this all onto. Yes, I can read the animation’s progress, but I need a trigger for when to read it.
Right now, I rely on the input’s `input` + `change` events to read the animation progress. While using these two events works in this case, my reflex as an author is to listen to the animation’s progress instead, as my brain is in animation mode, not in input mode. Were such a progress event available, I could easily take the animation + listener and attach it to some other input mechanism in CSS (such as a regular `ScrollTimeline`) without needing to set up other JS listeners for other types of events (such as `scroll`).
Using a `CustomEffect` would be possible here, but that would require me to create a `CustomEffect` in JS on top of the already existing CSS animation. This feels like a lot of extra work just to get notified of an already existing animation’s tick.
I believe a `progress` or `update` event on the animation would be a more convenient way for authors to achieve what I want to do here.
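To illustrate, a sketch of what that could look like for the slider demo, assuming a hypothetical `progress` event fired on every tick (`slider` and `rotateModel` stand in for the demo's element and 3D update code):

```js
// Hypothetical: use the animation's own tick as the trigger instead of the
// input's input/change events; the progress itself is still read as today.
const animation = slider.getAnimations()[0];
animation.addEventListener('progress', () => {
  rotateModel(animation.effect.getComputedTiming().progress);
});
```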
Another situation: part of the `sda-utilities` package I created is a `trackProgress` function. Authors can use this to synchronize videos, 3D models, etc. to Scroll-Driven Animations, but also to sync things to `DocumentTimeline`-based animations as well as all future types of timelines we can come up with (e.g. `PointerTimeline` and `MediaPlaybackTimeline`).

The implementation of that function itself is pretty nasty, as it relies on `requestAnimationFrame` to constantly read the progress of the animation:
```js
const trackProgress = (animation, cb, precision = 5) => {
  const updateValue = () => {
    let newProgress = animation.effect.getComputedTiming().progress * 1;
    if (animation.playState === 'finished') newProgress = 1;
    newProgress = Math.max(0.0, Math.min(1.0, newProgress)).toFixed(precision);
    // … (pass progress into cb)
    requestAnimationFrame(updateValue);
  };
  requestAnimationFrame(updateValue);
};
```
With `animation.progress` available I could replace some of the logic to use `animation.progress`, yet it would not allow me to remove the `requestAnimationFrame`, as I want the library to support any type of timeline, current and future.
```diff
-let newProgress = animation.effect.getComputedTiming().progress * 1;
-if (animation.playState === 'finished') newProgress = 1;
-newProgress = Math.max(0.0, Math.min(1.0, newProgress)).toFixed(precision);
+let newProgress = animation.progress;
```
Again, an `update` or `progress` event on the existing animation would offer a way out here, as that allows me to ditch `requestAnimationFrame`.
TBH, I don't see much difference between the 2 methods, except for the effect's `target`. Both involve:

- an `Animation` that wraps them with a corresponding `timeline`;
- `KeyframeEffectOptions` that set their timing options.

And regarding using the target to play/pause:
I agree that conceptually it makes sense to not have a target, though it could be a nice feature if we could skip custom effects if the target (e.g. the canvas being drawn to) was not in view or at least if the target is detached.
I think the playback management would better be solved using `AnimationTrigger`s, though having the animation GC'd when a target is removed could be nice.
In @bramus's example the main issue is not being able to get the pseudo-element, unless you grab the animation set by CSS and take its `timeline`, so it could look something like this:
```js
const timeline = $input.getAnimations()[0].timeline;
const effect = new CustomEffect(progress => { … }, {
  fill: 'both',
  direction: 'reverse'
});
const animation = new Animation(effect, timeline);
animation.rangeStart = 'contain 0%';
animation.rangeEnd = 'contain 100%';
animation.play();
```
We could also decide that `CustomEffect` also takes a `target`, and then provide it as an argument for the update callback. Could be useful for reusable drawing functions.
I think that the timing with respect to other effects needs to be clear either way. I think having a set of post-animation update callbacks is probably simpler to be honest.
Regarding sync between other effects, I suppose this is something that should probably be left for nesting effects via `GroupEffect`. We would need to define the order of operations of different effects.

But this could also be a good incentive for making progress on `GroupEffect`s (:
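For example, something along these lines (a rough sketch: `GroupEffect` is only drafted in Level 2, the `CustomEffect` signature is the one used elsewhere in this thread, and `el`/`drawFrame` are placeholders):

```js
// Nest a regular KeyframeEffect and a CustomEffect under one GroupEffect so
// they share a single Animation and timeline, keeping them in sync.
const fade = new KeyframeEffect(el, { opacity: [0, 1] }, { duration: 1000 });
const draw = new CustomEffect(progress => drawFrame(progress), { duration: 1000 });
const group = new GroupEffect([fade, draw]);
new Animation(group, document.timeline).play();
```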
So to summarize, for me this boils down to ergonomics. So naming some use-cases to make this more concrete:
There are basically 2 goals here:
In most of the use-cases I personally encounter in my work it's more straightforward to use the `CustomEffect` syntax, because syncing with other animations is less of an issue.
But I also don't see a reason why we can't have both?
And regarding using the target to play/pause:
I agree that conceptually it makes sense to not have a target, though it could be a nice feature if we could skip custom effects if the target (e.g. the canvas being drawn to) was not in view or at least if the target is detached.
I think the playback management would better be solved using `AnimationTrigger`s
When I say skip, this is a detail (up to now, an implementation one) that affects main-frame generation. There are many sites with a lot of complex content that is not yet scrolled on screen (or has been scrolled off screen). Conceptually all animations tick all of the time, but since the offscreen animations do not present any visual change, Chrome can skip generating frames if all of the updating content is offscreen. When a main frame is generated, all animations are updated to the current time such that all styles are as if the animation has been active.

With a custom effect callback, we could not do this sort of an optimization unless it was part of the API.

This skipping isn't about playback management: when we skip animations they conceptually continue playing; this is about optimizing frame generation. I also don't think this is something that developers usually do unless, as you alluded to, they intentionally want to pause playback. Most developers I've talked to don't manage all of the content on the site.

With CSS animations this didn't need to be in the spec, since a developer wouldn't be able to tell: if they request an animation frame, the animations are updated at that point. However, we wouldn't know for custom effects what they change, so it would need to be specified whether they are skipped in certain circumstances.
, though having the animation GC'd when a target is removed could be nice.
And, having a rooted node to get the animations from for `getAnimations()`.
With a custom effect callback, we could not do this sort of an optimization unless it was part of the API.
Thanks for the in-depth explanation! I get it now. But it seems that the only difference between the two is having a target, right?

My main concern here is that creating a dummy `KeyframeEffect` with an empty object seems a bit awkward for an API. I suppose it would be better to also keep the `target` argument for `CustomEffect` as well. Then the UA could still optimize frames as you mentioned.
Most developers I've talked to don't manage all of the content on the site.
Yes, I guess working on a large scale, generic tool forced us to take less chances and manage these more strictly.
And, having a rooted node to get the animations from for `getAnimations()`.

Right! So adding the `target` argument should enable that too.
But it seems that the only difference between the two is having a target, right?
And that if you want to simultaneously animate some CSS properties you'd generally end up creating multiple animations. My thinking was that by having a hook on regular animations you could run script-driven animations in tandem with the CSS property update rather than having to set the two up separately. I imagine developers may often animate a custom property which drives the logic of the custom effect.
Right! So adding the target argument should enable that too.
Yeah, the optimization part might end up being a bit non-trivial to implement, e.g. the UA wouldn't necessarily know whether the developer would animate the position of the element. However, we should spec it to be able to skip calling the animation update function in cases like `content-visibility: hidden` or where the UA thinks the target element won't be visible (e.g. clipped or otherwise contained).
TBH, I don't see much difference between the 2 methods
My eventual goal is to ditch the `requestAnimationFrame`, which has known performance implications. A `progress` event would allow that; a `CustomEffect` would too, but it’s a lot more work.
But I also don't see a reason why we can't have both?
If there is a `progress` event, then something like an `EmptyEffect` would be sufficient, no? As in: create the empty effect + use the `progress` event listener to update the things onscreen. IIUC this would allow the UA to reuse some of its already existing optimizations.
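A sketch of that combination, with both the `EmptyEffect` interface and the `progress` event being hypothetical names here, and `drawScene` as a placeholder:

```js
// Hypothetical shapes: an effect with no target or keyframes, wrapped in an
// Animation whose per-tick 'progress' event drives the actual rendering.
const effect = new EmptyEffect({ duration: 2000, iterations: Infinity });
const animation = new Animation(effect, document.timeline);
animation.addEventListener('progress', () => {
  drawScene(animation.effect.getComputedTiming().progress);
});
animation.play();
```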
@flackr:
And that if you want to simultaneously animate some CSS properties you'd generally end up creating multiple animations.
I could be wrong here, but from my experience these cases are quite rare. So I wouldn't mind creating separate effects for those, of course, considering I can sync them together using Groups/Timelines.
I imagine developers may often animate a custom property which drives the logic of the custom effect.
Using this method isn't really called for if you're just animating CSS properties, unless you're doing hacks that aren't really possible today, like mixing it with Transitions, for delayed effects or velocity-based effects, etc.
Yeah, the optimization part might end up being a bit non-trivial to implement
OK, I'm mainly coming from an Author POV, and not an expert on the implementation side. So, nothing I can add to that point.
I can only say that the shape of the progress event API with an empty effect seems weird from my side.
@bramus:
My eventual goal is to ditch the `requestAnimationFrame`, which has known performance implications.
Of course, 100%.
A `progress` event would allow that; a `CustomEffect` would too, but it’s a lot more work.
Again, you guys know the impl. side.
If there is a progress event, then something like an EmptyEffect would be sufficient, no? As in: create the empty effect + use the progress event listener to update the things onscreen. IIUC this would allow the UA to reuse some of its already existing optimizations.
Yeah, impl. wise. And if this results in a superior experience for the users, then great.
Just hoping we can also get a solid-looking API for authors on the way.
I think a better explanation of how I see it is this: if we define our MVP for this feature, the same way we defined a simple fade-in animation on entry for scroll-animations, it would be to just start a loop with `duration: Infinity`, say, for playing an animation on a canvas.

I think the design should allow this use-case to be as straightforward as possible.
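A minimal sketch of that MVP, assuming the `document.timeline.animate()` entry point proposed in the original post (`renderScene` is a placeholder for the author's canvas drawing code):

```js
// Start an indefinite script-driven loop, e.g. to keep a <canvas> scene animating.
const animation = document.timeline.animate(() => {
  renderScene();
}, { duration: Infinity });

// Unlike a bare requestAnimationFrame loop, this participates in the
// Web Animations model and can be paused, resumed or seeked.
animation.pause();
animation.play();
```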
Another thing I thought of just now is that an Animation `progress` event allows you to swap out timelines without needing to adjust your code for reading the progress.

For Scroll-Driven Animations you need a scroll listener to queue `overallProgress` being read, for Time-Based Animations you need a timer to queue `overallProgress` being read, … which means when the timeline changes you’d also need to change the queueing mechanism.

With a `progress` event this is not needed, as your code keeps working regardless of which timeline (or even which effect) is being used :)
I would like to revive work on custom effects, an idea currently unofficially drafted in Web Animations Level 2. I have filed a patch for WebKit to support the `CustomEffect` interface as well as a new `document.timeline.animate()` method. The motivation is to bridge the gap between the poorly-named and rudimentary `requestAnimationFrame()` and Web Animations, allowing authors to harness the full power of the Web Animations model such that scripted animations may be paused, resumed, seeked, etc.

Some example usage:
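A sketch of what such usage could look like, assuming the `CustomEffect` constructor takes an update callback plus timing options (as in the snippets later in this thread), with `box` as a placeholder element:

```js
const effect = new CustomEffect(progress => {
  // Apply the animation however script sees fit, based on the current progress.
  box.style.transform = `translateX(${progress * 100}px)`;
}, { duration: 1000 });
const animation = new Animation(effect, document.timeline);
animation.play();
```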
This code is equivalent to the more idiomatic:
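Presumably along these lines, with `document.timeline.animate()` creating and playing the animation in a single call:

```js
document.timeline.animate(progress => {
  box.style.transform = `translateX(${progress * 100}px)`;
}, { duration: 1000 });
```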
The idea here is that `document.timeline.animate()` should be to `CustomEffect` what `element.animate()` is to `KeyframeEffect`.

The section on custom effects in the current level 2 spec starts with this issue:

This whole feature needs to be revisited. The current thinking is that rather than having custom effects, we should simply have an onupdate callback on each animation effect. That would allow, for example, augmenting an existing effect with a function that performs logging or triggers additional actions at certain times. With the current arrangement, doing this would require adding a parent group just to achieve this.
I personally think that a dedicated `CustomEffect` interface is a simple and purposeful way to specify an animation where its application is performed by script. I expect that it is simpler to specify how this specific class of effects would work rather than trying to add an `onupdate` callback that would apply to keyframe effects as well.

You'll also notice the lack of a `target` for `CustomEffect`. I believe that it should not be necessary to specify a target for a custom effect since it may not target a single node, or even a node at all, but rather a JS object controlling the rendering of a scene using `<canvas>` APIs.

That being said, I am very open to all feedback to work towards exposing callback-based animations in Web Animations. All work conducted in WebKit is behind an off-by-default experimental feature flag and we have no intention to expose this to the Web until we have consensus on the way forward.
Cc @birtles @flackr @stephenmcgruer @kevers-google @majido @smfr @grorg @hober @ogerchikov @BorisChiou @hiikezoe