If I have an animation that I want to play at a certain FPS on the user's screen, how would I accomplish this? I've seen some people do this:
setInterval( function () {
    requestAnimationFrame( draw );
}, 1000 / 60 );
Is that the proper way to do it?
I think this is the simplest and most respectful (for the other tabs and the user's CPU) solution.
What happens if someone's computer is too slow to handle the geometry in the scene? Is there some way to do frame skipping in order to keep their framerate at the target rate, skipping certain frames in order to stay there?
Well, a proper way to deal with that is to animate based on elapsed time instead (please, do it):
var currentTime = new Date().getTime();
var delta = currentTime - previousTime; // milliseconds since the previous frame
previousTime = currentTime;
object.position.z += delta; // movement scales with elapsed time, not with framerate
Actually that solution is not right... using the Page Visibility API would be more correct, although for now it only works in WebKit browsers:
setInterval( function () {
    if ( ! document.webkitHidden ) requestAnimationFrame( draw );
}, 1000 / 60 );
There is also this other, more correct solution, though it's also harder to grasp:
var start = window.animationTime;
var rate = 10; // Hz
var duration = 10; // s
var lastFrameNumber;

function animate() {
    var elapsed = window.animationTime - start;
    if (elapsed < duration) {
        window.requestAnimationFrame(animate);
    }
    var frameNumber = Math.round(elapsed / (1000 / rate));
    if (frameNumber == lastFrameNumber)
        return;
    lastFrameNumber = frameNumber;
    // ... update the display based on frameNumber ...
}
window.requestAnimationFrame(animate);
I'll have to pick apart that last one to understand it a bit more. Thanks for the reply! They make sense now that I see them!
Where are you getting window.animationTime?
You can probably replace that with new Date().getTime().
Any opinion on creating a new Date object on every frame?
Needs to be done.
Actually that is not completely true, as requestAnimationFrame should normally pass the timestamp as the first argument to its callback:
var draw = function ( timeStamp ) {
    // use timeStamp here instead of creating a new Date every frame
};

requestAnimationFrame( draw );
Although I don't know how well this argument is supported across browsers.
i remember time not being passed in safari's requestAnimationFrame, so i ended up using Date.now() if the timestamp is not available.
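A minimal sketch of that fallback (the guard below is illustrative, not code from this thread):
function draw( timeStamp ) {

    // Safari (at the time) didn't pass a timestamp, so fall back to Date.now()
    var time = ( typeof timeStamp === 'number' ) ? timeStamp : Date.now();

    // ... update and render using time ...

    requestAnimationFrame( draw );

}

requestAnimationFrame( draw );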
requestAnimationFrame should normally pass the timestamp as the first argument to its callback.
Oh wow, didn't know that :)
i remember time not being passed in safari's requestAnimationFrame, so i ended up using Date.now() if the timestamp is not available.
No surprise there... :/
btw @mrdoob, any reasons to still use new Date().getTime() if Date.now() is faster? IE perhaps?
Cool, didn't know about Date.now().
Works on IE9 (and current stable Chrome / Firefox / Opera / Safari on Windows 7), maybe we can switch?
And there is a shim to make it work on older IEs:
http://jsperf.com/new-date-value/6
if ( ! Date.now ) {
    Date.now = function () { return +new Date(); };
}
var datenow = Date.now;
Works pretty well on new browsers.
@alteredq nice, didn't know of that shim, although i haven't used three.js with <= ie8 to date :)
Didn't know about Date.now() either!
Works on IE9 (and current stable Chrome / Firefox / Opera / Safari on Windows 7), maybe we can switch?
Totally!
And there is a shim to make it work on older IEs:
Bah! ;)
Now that we are touching time topics, I finally started to experiment with the Clock class and putting time keeping out of the XXXControl classes. Just wanted to see how it looks:
https://github.com/alteredq/three.js/commit/f1f744f0ba1340badb9e3543be4fe7b0ccb0724e
https://github.com/alteredq/three.js/commit/093f17f38c974f3d7dda61ea03ac011bcd2db53f
If you think this is the right direction, I can change the rest.
General idea is to have all time inputs potentially controllable (no hidden time keeping somewhere deep in classes), so that for example we could do things like "matrix" effects (slow-motion, time-freeze, could be whole scene, or per-object), or for debugging supply fixed time steps.
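For example, a minimal sketch of what externally controlled time could look like, assuming a Clock-style getDelta() and a hypothetical timeScale factor (controls here just stands in for whatever consumes the delta):
var clock = new THREE.Clock();
var timeScale = 0.25; // 1 = normal speed, 0.25 = slow motion, 0 = time-freeze

function animate() {

    requestAnimationFrame( animate );

    var delta = clock.getDelta() * timeScale; // scale elapsed time before handing it out
    controls.update( delta );

    renderer.render( scene, camera );

}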
Uhm, what do you think about using the brand new Timer.js? It matches the Audio object properties, which already allows slow-motion (playbackRate), looping, etc... Could go as THREE.Timer?
Just added a .currentDelta property so it can work for delta-dependent cases.
However, instead of milliseconds it uses seconds (whoever decided to use seconds on the Audio object...).
Oh, and I've been using it for displaying Maya-exported animations, and the code gets reduced to this:
for ( var i = 0; i < skin.morphTargetInfluences.length; i++ ) {
    skin.morphTargetInfluences[ i ] = 0;
}

skin.morphTargetInfluences[ Math.floor( timer.currentTime * 25 ) ] = 1;
25 being the FPS the animation is done at.
Hmmm, Timer.js looks interesting, it's just that it uses setInterval (and it's fixed at 60 fps).
Clock is completely passive, it just does stuff when you query it.
After all the requestAnimationFrame proselytization, setTimeout feels wasteful, but if you prefer, we can use Timer.
After all the requestAnimationFrame proselytization, setTimeout feels wasteful, but if you prefer, we can use Timer.
Yup, totally understand that. But I didn't know how else the .playbackRate feature could be emulated. Someone on twitter suggested using __defineGetter__ instead of setInterval, which I'll give a go eventually.
You remember getters have terrible performance (not that it would matter here)?
So the main issue for Timer is that, for compatibility with Audio, it needs to respond to a simple assignment to the naked playbackRate property while playback is running.
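A possible sketch of how that could work without a running interval: a getter/setter pair so a plain assignment to playbackRate takes effect mid-playback (the names and structure here are assumptions, not the actual Timer.js code):
function Timer() {

    var rate = 1, accumulated = 0, last = Date.now();

    // currentTime is computed lazily, only when queried
    this.__defineGetter__( 'currentTime', function () {
        return ( accumulated + ( Date.now() - last ) * rate ) / 1000; // seconds, like Audio
    } );

    this.__defineGetter__( 'playbackRate', function () {
        return rate;
    } );

    // bank the time elapsed at the old rate, then switch to the new one
    this.__defineSetter__( 'playbackRate', function ( value ) {
        accumulated += ( Date.now() - last ) * rate;
        last = Date.now();
        rate = value;
    } );

}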
Compared to accessing public variables it doesn't seem too bad now... http://jsperf.com/getter
This is some weird test, it's not an apples-to-apples comparison.
Here regular properties are noticeably faster (63x in Chrome, 3x in Firefox):
Actually, my approach to timer.currentDelta is incorrect. I think I'll give the __defineGetter__ approach a go right now :)
Here regular properties are noticeably faster (63x in Chrome, 3x in Firefox):
Uhm, this test is also hairy... doesn't run on Opera for some reason.
I was mainly comparing the __defineGetter__ approach vs having an interval running. Uhm... for Vector stuff performance is definitely important, but considering this is something called once per frame, I think I would bet on the usability/compatibility side.
Updated the jsperf: http://jsperf.com/getter
So I don't know, maybe setInterval at 60fps for this isn't that bad? WebGL is capped at 60fps anyway.
With setInterval at 60 fps I'm also a bit worried about some subtle synchronization issues.
Frames we get from requestAnimationFrame are not going to be the same as frames we get from setInterval.
Maybe Timer could be run at 120 fps (not sure if it's not anyway capped at something smaller)?
Hmmm, as a sidenote, I just realized your example of Maya animation is then played back less smooth than it could be. Even if authored at just 25 fps, we can play it much smoother by interpolating between frames.
That's what we have been doing before, in my Blender exported animations I intentionally skipped many frames to save JSON file size (just 11 frames for 2 second loop, looked practically the same as much denser original sampling).
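A minimal sketch of that interpolation, reusing the morph target snippet from earlier (totalFrames and the authored 25 fps are assumptions, and all other influences are assumed to have been reset to 0 first):
var frame = timer.currentTime * 25;        // 25 = fps the animation was authored at
var a = Math.floor( frame ) % totalFrames; // keyframe just before the current time
var b = ( a + 1 ) % totalFrames;           // keyframe just after
var t = frame - Math.floor( frame );       // 0..1 position between the two keyframes

skin.morphTargetInfluences[ a ] = 1 - t;
skin.morphTargetInfluences[ b ] = t;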
Not sure if this is of any use: my particle engine runs an internal loop with setTimeout(). This allows the engine to run independently of framerate while giving consistent results:
https://github.com/zz85/sparks.js/blob/master/Sparks.js#L61
This method was also described in http://gameclosure.com/2011/04/11/deterministic-delta-tee-in-js-games/
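A rough sketch of that kind of fixed-timestep approach (simulate() is a placeholder here, not code from sparks.js or the linked article): the simulation advances in constant steps no matter how often the browser actually renders:
var STEP = 1000 / 60;                   // fixed simulation step in ms
var accumulator = 0, last = Date.now();

function animate() {

    requestAnimationFrame( animate );

    var now = Date.now();
    accumulator += now - last;
    last = now;

    // run as many fixed steps as the elapsed time requires
    while ( accumulator >= STEP ) {
        simulate( STEP );
        accumulator -= STEP;
    }

    renderer.render( scene, camera );

}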
@alteredq
With regard to interpolated frames, it is important to note that Maya itself only plays back animation on whole frames. It does not interpolate between frames.
On the other hand, MotionBuilder does interpolate between frames on playback. Even scrubbing the timeline you can smoothly move between frames and see sub-frame animation. But with Maya, playback and timeline scrubbing only plays on-frame keyframes in order to preserve accurate keyframe animation.
Anyways, while it is nice to play back an animation using interpolated sub-frame keyframe values in order for it to look smoother, a lot of the time it is more important to maintain accurate playback. If sub-frame keyframe interpolation were to be implemented, I would hope that it would be 100% optional.
Hmmm, as a sidenote, I just realized your example of Maya animation is then played back less smooth than it could be. Even if authored at just 25 fps, we can play it much smoother by interpolating between frames.
That's what we have been doing before, in my Blender exported animations I intentionally skipped many frames to save JSON file size (just 11 frames for 2 second loop, looked practically the same as much denser original sampling).
Yep, I know, but in this case the animator wanted it like this :)
Anyways, while it is nice to play back an animation using interpolated sub-frame keyframe values in order for it to look smoother, a lot of the time it is more important to maintain accurate playback. If sub-frame keyframe interpolation were to be implemented, I would hope that it would be 100% optional.
Totally, currently that code sits on the application level. It's up to you how you display the morphtargets.
@zz85 Feels a bit weird - response to getting larger deltas is to run even more computations? Wouldn't this make everything slow down with time?
I would probably just clamp deltas (in "good times" you get accurate simulation, in "bad times" you basically degenerate to fixed step simulation).
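In code that could be as simple as this (the 1/30 cap and the speed variable are arbitrary, assumed values):
var delta = Math.min( clock.getDelta(), 1 / 30 ); // never advance more than 1/30 s per frame
object.position.z += speed * delta;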
@Jakobud Good point. So far all examples used interpolation, but there could be some use cases where it could create problems.
@mrdoob Meanwhile I tried to extract this application-level code into a MorphAnimMesh class. Having a parameter for controlling interpolation could be a feature.
https://github.com/alteredq/three.js/blob/experimental_shading/src/objects/MorphAnimMesh.js
@mrdoob Meanwhile I tried to extract this application-level code into a MorphAnimMesh class. Having a parameter for controlling interpolation could be a feature.
https://github.com/alteredq/three.js/blob/experimental_shading/src/objects/MorphAnimMesh.js
That's interesting. It may be a bit too much logic though. I think I would just pass a 0-1 value and an interpolation boolean: updateAnimation( progress, interpolation )?
I had trouble getting mirrored looping going (forward-backward-forward...); that's where most of the logic comes from.
It's for the sitting dude here (the original animation is not loopable):
http://alteredqualia.com/three/examples/webgl_shadowmap_particles.html
@alteredq that's for accurate frame dropping. the bottleneck is usually at rendering, which is why requestAnimationFrame comes into play. e.g. the game engine can easily run at 100 updates per second, but the display can only refresh at 60 fps or 15 fps. but yes, you get into trouble if the game loop is too slow.
you can just use the delta for simple interpolation or verlet integration, but euler integration will accumulate a greater margin of error if the deltas get too big over time.
@alteredq Think of the possibility of an editor with a timeline that connects with three.js and wants to update the animation while the user is scrubbing the timeline. For such an application, doing the 0 to 1 calculation to send to the method is far easier than coming up with the right delta ;) Same thing with the ping-pong logic.
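For instance, a hypothetical scrubber handler under the proposed updateAnimation( progress, interpolation ) signature (slider and mesh are assumptions):
slider.addEventListener( 'input', function () {

    var progress = slider.value / 100;      // map the scrubber to 0..1
    mesh.updateAnimation( progress, true ); // true = interpolate between keyframes

    renderer.render( scene, camera );

} );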
@mrdoob i imagine a flash / after effects-like keyframing and tweening timeline in three.js :-)
@zz85 Wasn't the bottleneck for Sparks the simulation? With WebGL you can render a million particles, but the JS simulation starts to choke at tens of thousands.
@mrdoob Deltas are used everywhere else, so it was kinda natural, I didn't even think of use cases other than just vanilla playback. Progress based control would be cool, it would allow for example to use your beloved tweens for some fancy motion distortions ;).
@alteredq Yayay! ^^
@alteredq not really. i feel that the bottleneck is pushing the buffers from the CPU to the GPU every time you need to render. while JS is slow, simulating a million particles in JS is still faster than rendering a million particles to screen, so where it chokes is still requestAnimationFrame(), unless the pipeline is all in the GPU, imho.
@zz85 Did some tests with a million particles (not Sparks, just moving things around in JS using dirtyVertices and position.x += 1):
So yes, sending stuff to the GPU is expensive, but doing lots of even trivial things in JS can be surprisingly costly. For some reason this triggers massive garbage collection; I've encountered such behavior before: you don't need to allocate anything, if you just do a lot of something, it'll cause garbage collection.
The GC is probably from the fact that you use 1 instead of a variable to update the position.
@gero3 Using a simple numerical constant IMHO shouldn't create new objects. Also, it's the same GC overkill if you do it like this:
var n = 1;
// loop over million items
position.x += n;
Or like this:
// loop over million items
position.x = Math.random();
Anyways, JS engines work in a mysterious way.
I keep digging at particles and I discovered one massive performance bottleneck. It's kinda crazy: I got almost double the performance by changing code that doesn't even get executed (in that particular run it's behind an if that is never true for that test case).
Must be some V8 magic optimization thing - I guess some code patterns may prevent V8 from applying more aggressive optimizations.
actually, I meant moving the variable outside the function scope. I've had problems with numerical constants too for GC.
So far, I've followed the tutorial that suggested calling requestAnimationFrame() and rendering in a loop:
function render() {
    requestAnimationFrame( render );
    renderer.render( scene, camera );
}

render();
This bug caused my page to hog 1-2 CPU cores forever, even if nothing at all is happening in the scene, as long as my webpage was open, for days. It drains battery, uses electricity needlessly, and spins up fans. Thus, causing noise pollution and indirectly environmental pollution. Needless to say, that's a (page) killer. I was so frustrated, I was seriously trying to rewrite everything using another 3D library. (SceneJS doesn't seem to have the same problem.)
Finally, I realized that my scene only changes on user input. Therefore, I made my call to requestAnimationFrame() conditional on whether there was a user input in the last 2 seconds. If not, I would go into standby mode and not call requestAnimationFrame(). As soon as there's new action, I call render() again.
Here's my code:
var gLastMove = Date.now();
var gRunning = true;
var kStandbyAfter = 2000; // ms

function render(time) {
    TWEEN.update(time);
    renderer.render(scene, camera);

    if (gLastMove + kStandbyAfter < Date.now()) {
        gRunning = false; // no input for a while: stop the loop
    } else {
        gRunning = true;
        requestAnimationFrame(render);
    }
}

function requestRender() {
    gLastMove = Date.now();
    if (!gRunning) {
        gRunning = true; // avoid queueing the loop twice
        requestAnimationFrame(render);
    }
}

window.addEventListener("mousemove", requestRender, false);
window.addEventListener("keydown", requestRender, false);

requestAnimationFrame(render); // kick off the first frame
Super-ugly. But this solved my problem. The CPU usage drops from 2 cores with 100% each to almost nothing, after 2 seconds of no activity on my page.
Your situation may vary; you may have factors other than user input that cause scene changes for you. But maybe it's a lot less often than every 16 ms, and more importantly, there may be phases of complete inactivity when you can turn things off entirely until some trigger (in my case: mouse move).
I hope this helps. I still consider this to be a band-aid and ugly, and I think this should be fixed in ThreeJS. If nothing else, to save all our CPU cores on all the pages that use ThreeJS.
Using this code makes my FPS drop to a slideshow:
setInterval( function () {
    requestAnimationFrame(animate);
}, 1000 / 60 );
When I just do requestAnimationFrame(animate) by itself with no timer, everything is smooth again. What could be causing that? It seems like my performance should improve with setInterval, since presumably animate would be called less often. I actually just created an Electron app using the basic cube example and it still happens.
Well, I just added Stats to my app so that I could see what the frame rate drops to. Turns out it doesn't drop. It leaps to ~7,500 FPS! Now I'm really confused.
Edit: This solution worked for me: https://stackoverflow.com/a/19772220/996314
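For reference, one common pattern for capping the update rate inside requestAnimationFrame itself (a sketch along those lines, not necessarily identical to the linked answer):
var fps = 30;
var interval = 1000 / fps;
var then = Date.now();

function animate() {

    requestAnimationFrame( animate );

    var now = Date.now();
    var delta = now - then;

    if ( delta > interval ) {
        // keep the remainder so the effective rate stays close to the target
        then = now - ( delta % interval );
        renderer.render( scene, camera );
    }

}

animate();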