Luminoth opened this issue 3 years ago
https://github.com/Luminoth/nature-of-code/tree/main/ecosystem is my more complex example if that's of use
This absolutely looks like a bug of some flavor to me, and worth examining in depth.
I'd worked with @YohDeadfall on related investigations in #1751 where the problem was simple user confusion but this suggests that there may be something more deeply wrong.
If it's of any help sorting out what's going on, in that more complex example I linked this is the setup (I believe even with #1839 the run criteria here takes precedence over the state):
...
.add_system_set(
    // fixed (physics) update
    SystemSet::on_update(GameState::Game)
        .with_run_criteria(FixedTimestep::step(PHYSICS_STEP as f64))
        ....
        .with_system(
            physics_update
                .system()
                .label(Physics)
                .label(PhysicsSystem::Update),
        )
...
however I do add another system set on the same state and I'm not sure how legit that actually is:
...
.add_system_set(
    // per-frame update
    SystemSet::on_update(GameState::Game)
        ...
That physics_update system is just this:
pub fn physics_update(time: Res<Time>, mut query: Query<(&mut Transform, &mut Rigidbody)>) {
    for (mut transform, mut rigidbody) in query.iter_mut() {
        rigidbody.update(&mut transform, time.delta_seconds());
    }
}
and that rigidbody.update() is basically the same as the simpler example:
pub const PHYSICS_STEP: f32 = 0.02;
const PHYSICS_STEP_MIN: f32 = PHYSICS_STEP / 4.0;
const PHYSICS_STEP_MAX: f32 = PHYSICS_STEP * 4.0;
...
#[tracing::instrument]
pub fn update(&mut self, transform: &mut Transform, mut dt: f32) {
    // during the first few seconds of runtime the dt can be wildly large,
    // so to prevent weird things happening just treat them as single steps
    if dt < PHYSICS_STEP_MIN || dt > PHYSICS_STEP_MAX {
        info!("unexpected physics step, expected {}", PHYSICS_STEP);
        dt = PHYSICS_STEP;
    }
    ...
}
Running that with --release produces constant spam like this:
Apr 29 07:51:05.709 INFO update{self=Rigidbody { acceleration: Vec3(-24.434568, 113.427795, 0.0), velocity: Vec3(1.8647287, 7.5533237, 0.0), mass: 1500.0, drag: 0.03 } transform=Transform { translation: Vec3(-383.5281, 263.0, 0.0), rotation: Quat(0.0, 0.0, 0.0, 1.0), scale: Vec3(1.0, 1.0, 1.0) } dt=0.000909554}: ecosystem::components::physics: unexpected physics step, expected 0.02
Apr 29 07:51:05.709 INFO update{self=Rigidbody { acceleration: Vec3(7.3920627, -77.16837, 0.0), velocity: Vec3(3.5670257, -9.7047415, 0.0), mass: 1500.0, drag: 0.03 } transform=Transform { translation: Vec3(-479.82996, -78.321945, 0.0), rotation: Quat(0.0, 0.0, 0.0, 1.0), scale: Vec3(1.0, 1.0, 1.0) } dt=0.000909554}: ecosystem::components::physics: unexpected physics step, expected 0.02
Apr 29 07:51:05.709 INFO update{self=Rigidbody { acceleration: Vec3(61.192356, 60.715584, 0.0), velocity: Vec3(-75.69085, 126.85722, 0.0), mass: 15.0, drag: 0.04 } transform=Transform { translation: Vec3(-393.98312, 273.0, 20.0), rotation: Quat(0.0, 0.0, 0.0, 1.0), scale: Vec3(1.0, 1.0, 1.0) } dt=0.000909554}: ecosystem::components::physics: unexpected physics step, expected 0.02
(running without --release I only get the large dt dump during startup)
Oh, this may be an issue of your run criteria not combining like you're expecting. What happens if you replace your fixed time step with a run criteria that always returns ShouldRun::No?
It does what you'd expect: none of the systems under that criteria run:
fn never_run() -> bevy::ecs::schedule::ShouldRun {
    bevy::ecs::schedule::ShouldRun::No
}

...

.add_system_set(
    // fixed (physics) update
    SystemSet::on_update(GameState::Game)
        //.with_run_criteria(FixedTimestep::step(PHYSICS_STEP as f64))
        .with_run_criteria(never_run.system())
...
All of the systems not under it continue to run like normal.
I think there is just some confusion about what Time is measuring. Time is calculated once per frame and contains the "real" delta time since the last frame started. A "fixed timestep" system runs zero-to-many times each frame based on the current Time::delta() (edit: and previously accumulated time), and the original value of Time::delta() won't change throughout this process. When a "fixed timestep" system runs, that means the run criteria has accumulated enough "real delta time" to run the fixed timestep system. Therefore fixed timestep systems should use whatever constant timestep was passed in to the run criteria for their delta time calculations (ex: PHYSICS_STEP), not Time::delta().
There might be any number of different "fixed timestep" systems running in parallel with different step sizes, so updating the global Time::delta() isn't really feasible here (and doing so would change the context of that value, in a confusing way).
So in short, your fixed timestep systems should already be running at the correct rate and Time shouldn't be used because it is the "frame time".
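Conceptually, the run criteria can be pictured as a time accumulator. Here's a minimal plain-Rust sketch of that idea (this is an illustration of the concept, not Bevy's actual implementation): a short frame accumulates too little time and yields zero runs, while a long frame can yield several back-to-back runs, each of which should use the constant step as its dt.

```rust
// Sketch of a fixed-timestep run criteria as a time accumulator
// (illustrative only, not Bevy's actual code).
struct FixedTimestepAccumulator {
    step: f64,        // the fixed step size, e.g. PHYSICS_STEP
    accumulator: f64, // accumulated "real" frame time
}

impl FixedTimestepAccumulator {
    fn new(step: f64) -> Self {
        Self { step, accumulator: 0.0 }
    }

    /// Feed in the frame's Time::delta(); returns how many times the
    /// fixed-step systems should run this frame (zero to many).
    fn advance(&mut self, frame_delta: f64) -> u32 {
        self.accumulator += frame_delta;
        let mut runs = 0;
        while self.accumulator >= self.step {
            self.accumulator -= self.step;
            runs += 1;
        }
        runs
    }
}

fn main() {
    let mut ts = FixedTimestepAccumulator::new(0.02);
    // A short 5 ms frame: not enough time accumulated, so zero runs...
    assert_eq!(ts.advance(0.005), 0);
    // ...then a long 50 ms frame: the accumulated 55 ms yields two full
    // 20 ms steps, each of which should use dt = 0.02, not Time::delta().
    assert_eq!(ts.advance(0.050), 2);
    println!("remaining accumulator: {:.3}", ts.accumulator);
}
```

This also shows why the frame delta observed inside a fixed-step system is essentially arbitrary: it reflects how long the last frame took, not how much simulation time the step represents.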
That being said, I agree that this goes against the intuition other engines have set up:
impl Update for MyGameObject {
    // delta is the "frame time"
    fn update(delta: Duration) {}

    // delta is the "fixed update" timestep (often runs as part of the physics loop)
    fn fixed_update(delta: Duration) {}
}
It's worth discussing ways to prevent people from making these assumptions (either via api changes or doc changes).
That was actually going to be my next question: whether I was OK to just always use my PHYSICS_STEP and ignore delta_seconds. Doing that was giving me the correct simulation results, so it's good to know all of that is lining up. Coming from Unity I'm very much used to having Time.fixedDeltaTime, so that was the habit I reached for. It makes perfect sense though why that paradigm is more complicated here. I appreciate the explanation!
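For reference, the pattern that ends up being correct here can be sketched in plain Rust. The Rigidbody below is a simplified stand-in for the one in the issue (not the actual code): the integration always uses the constant step, never the frame delta, so the dt-clamping workaround becomes unnecessary.

```rust
// Simplified stand-in for the issue's Rigidbody, showing integration
// with the constant fixed step instead of the frame delta.
const PHYSICS_STEP: f32 = 0.02;

struct Rigidbody {
    velocity: f32,
    acceleration: f32,
    position: f32,
}

impl Rigidbody {
    // dt is always PHYSICS_STEP when called from the fixed-timestep set.
    fn update(&mut self, dt: f32) {
        self.velocity += self.acceleration * dt;
        self.position += self.velocity * dt;
    }
}

fn main() {
    let mut body = Rigidbody { velocity: 0.0, acceleration: 10.0, position: 0.0 };
    // Run three fixed steps; each advances exactly PHYSICS_STEP of
    // simulation time, regardless of how long the frames actually took.
    for _ in 0..3 {
        body.update(PHYSICS_STEP);
    }
    println!("v = {:.2}, x = {:.4}", body.velocity, body.position);
}
```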
I ran into this issue recently, and I'd suggest having the Res<Time> resource be slightly more sophisticated. It could be something like a Res<Collection<Time>>, where to get a particular Time from the Collection<Time> you need to provide a system set label or something. The idea being that there is exactly one Time object associated with each FixedTimestep system set, and an additional Time instance which represents the current behavior.

The current behavior is certainly surprising, and I think it's worth designing an API for Res<Time> that is difficult to misuse, i.e. one where it's clear that when you use the default Time in a FixedTimestep it will not be tracking delta_seconds for that FixedTimestep.
Yup I agree that we need to make this a bit more intuitive / harder-to-misuse. A global "Time::fixed_delta" would eliminate this problem, but would also complicate "multiple differing FixedTimestep run criteria" a little bit. But that is admittedly a pretty big corner case. I think we can find a solution that enables both scenarios, but gives the common "single fixed timestep" pattern an "ideal" api.
So there's no fundamental difference in the way Bevy's FixedTimestep works compared to Unity's. It's not intended to be a fixed interval (a super common misunderstanding). It's a fixed integration step size. The delta time aspect is virtual. FixedTimestep only guarantees that attached systems "run N times per second" on average, which isn't strictly equivalent to "run once every 1/N seconds." The latter will be violated whenever necessary to maintain the former.

I believe a Timer is what most people are probably looking for when they want to run something on a real fixed interval.
Honestly, I don't think multiple timesteps is a valid pattern. It's doubtful they can be interleaved/parallelized in a bug-free way. Two FixedTimestep instances with different deltas would rarely be ShouldRun in the same frame.
Having more than one timestep also introduces non-determinism because the instances don't know how they're supposed to be interleaved. If a user had timesteps A and B, where B's timestep was half of A's, and several steps were queued because of a long frame, a sequence that would normally go "A+B, B, A+B, B" would execute as "A+B, A+B, B, B." (A "tick timer" would be more appropriate for running systems at an integer multiple of the "main" fixed timestep).
(edit: I'm personally very much in favor of adding a global Time::fixed_delta. I'd also suggest a global Time::dilation that scales the frame time added to the timestep accumulator, since users have been asking for a way to step faster/slower on-demand. Like this: https://gist.github.com/maniwani/709cb2867eccd9b0eb34514c74cec4bb)
Bevy version
0.5
Operating system & version
Ubuntu 20.04
What you did
Running the following code in a debug build (cargo run), if I move the window around, I get dt values that are significantly less than the desired fixed timestep. Additionally, the fixed_update() method is called repeatedly on startup with larger dts (this does clear out after a second or so of runtime).
I haven't seen anything in the documentation so far that would lead me to expect either of these scenarios. I have a more complex example of this (which I wish reduced to a simpler one) where my simulation is fine in a debug run, but in a release run does effectively nothing, because I constantly get dts that are significantly less than the expected step (similar to the dt values I get when I just move the window around in the simple example).
Is there something that I'm missing with the FixedTimestep that could get me back on track to having dts closer to my required step size? Or is what I'm experiencing here actually the expected result? (I imagine not, considering what's happening in my more complex simulation using a release build, but I'm unsure.)