sinonjs / fake-timers

Fake setTimeout and friends (collectively known as "timers"). Useful in your JavaScript tests. Extracted from Sinon.JS

[Feature]: Speed up all fake timers #459

Closed: Noriller closed this issue 4 months ago

Noriller commented 1 year ago

#16 was closed, but I feel it would be good to have something like this.

My problem is that many packages/component libraries use timers internally, so you can't just fake them and run everything. Especially in integration tests, where multiple things happen, I had tests breaking because I couldn't finish an action and then run the timers (and I don't want to spam timer-running calls everywhere).

I understand that this package is a dependency of Jest and used for frontend testing. I would like to "warp" the speed 100x-1000x, so micro-animations that would normally take 200ms would take 2ms or less, greatly speeding up tests with minimal change (just setting the speed after enabling fake timers). The best part is not having to spam timer-running calls or mock animation frameworks (which are sometimes deeply nested in component libraries).

I feel the existing config.shouldAdvanceTime is limited, since it only advances x ms of fake time per x ms of real time. A "warp" option would simply run everything faster, without that 1:1 throttling that advancing time has.

SimenB commented 1 year ago

I don't really follow this. Why not use clock.runAll() or something? Or pass "warp" to tick, like clock.tick(16 * 200). Do you have an example of a test where this feature would allow it to run faster?
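
(For context, a minimal sketch of the existing alternatives mentioned here, assuming a clock obtained from FakeTimers.install(); the timeout and numbers are illustrative only:)

```js
const FakeTimers = require('@sinonjs/fake-timers');
const clock = FakeTimers.install();

// Something a component library might schedule internally.
setTimeout(() => console.log('animation step'), 200);

// Either flush every scheduled timer (and any timers they schedule) immediately...
clock.runAll();

// ...or "warp" manually by advancing a large chunk of fake time in one call.
clock.tick(16 * 200);

clock.uninstall();
```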

Noriller commented 1 year ago

The motivation is that I'm using Ant Design at work, and this is what their tests look like:

https://github.com/ant-design/ant-design/blob/4.x-stable/components/menu/__tests__/index.test.tsx https://github.com/ant-design/ant-design/blob/4.x-stable/components/select/__tests__/index.test.tsx

Basically, they sprinkle "runAll" all over, probably because of micro animations.

Those are only unit tests, and there are cases where they need to "runAll" between each new event.

I have the same problem at work, but in some cases I would need to "runAll" between events where I shouldn't have to, or else wait and have the tests run more slowly.

If I could have the timers running (like with shouldAdvanceTime, but where each tick moves fake time forward faster than real time), then in many instances all I would need to do is enable fake timers with that config and forget about them.

So, instead of forcing the timers to run, I could focus on testing what I want and "compress" the time it takes. (I like to think of it as skipping parts of a video vs. using a higher playback speed.)

SimenB commented 1 year ago

How would you rewrite those tests to use your suggested API? The tests seem reasonable to me. They mock time, so every time they schedule microtasks and/or timers (such as rendering), they run any pending ones immediately. Using "warp" wouldn't change anything at all, from what I can tell.

CreativeTechGuy commented 1 year ago

@SimenB I think what @Noriller is saying is to have the fake timers only be "partially" mocked. Currently, mocked timers are basically paused until they are manually advanced. It sounds like they want an option for mocked timers that tick forward on their own, but far faster than real time. So let's say that for every 1ms of real time, the fake timers are advanced by 10ms. Currently the advanceTimeDelta setting is a 1:1 ratio. So we could have an advanceTimeRatio setting that says, for every advanceTimeDelta of real time that passes, how much to multiply it by when advancing the mocked time. This would then be a global setting, so you don't need to think about the fake timers within your test.
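
(A sketch of how that might look against the current install options. shouldAdvanceTime and advanceTimeDelta exist in @sinonjs/fake-timers today; advanceTimeRatio is the hypothetical option being proposed in this thread, not an existing one:)

```js
const FakeTimers = require('@sinonjs/fake-timers');

// Existing behaviour: every 20ms of real time, fake time also advances by 20ms (1:1).
const clock = FakeTimers.install({
  shouldAdvanceTime: true,
  advanceTimeDelta: 20,
});

// Proposed behaviour (hypothetical, not implemented): every 20ms of real time,
// fake time would advance by 20 * 10 = 200ms.
// FakeTimers.install({
//   shouldAdvanceTime: true,
//   advanceTimeDelta: 20,
//   advanceTimeRatio: 10,
// });
```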

benjamingr commented 1 year ago

Isn't this just setInterval(() => clock.tick(10), 1) with a native setInterval?

CreativeTechGuy commented 1 year ago

@benjamingr would that work since setInterval is being mocked? That seems like a deadlock. The setInterval cannot progress since timers are mocked.

Also, this seems like an extension of the existing shouldAdvanceTime config, so if that has justification to exist, this would follow the same justification.

fatso83 commented 1 year ago

@CreativeTechGuy you just capture a reference before mocking it.

The GitHub app won't let me syntax highlight while on my phone, but essentially

const nativeSetTimeout = window.setTimeout;
lolex.install(); // I miss our old name 😿

You could just capture that as a little utility module to import.

import spedUpAutoTick from './autotick';

and use that in tests.
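
(A rough sketch of what such an ./autotick module might look like, combining the native-setInterval idea above with the capture-before-install advice. spedUpAutoTick, speedFactor, and realDelta are illustrative names from this discussion, not part of the library:)

```js
// autotick.js
const FakeTimers = require('@sinonjs/fake-timers');

// Capture the real interval functions before fake timers replace them,
// so the auto-tick loop itself is not frozen by the mocked clock.
const nativeSetInterval = globalThis.setInterval;
const nativeClearInterval = globalThis.clearInterval;

function spedUpAutoTick({ speedFactor = 10, realDelta = 1 } = {}) {
  const clock = FakeTimers.install();
  // Every `realDelta` ms of real time, advance fake time by `realDelta * speedFactor` ms.
  const handle = nativeSetInterval(() => clock.tick(realDelta * speedFactor), realDelta);
  return {
    clock,
    uninstall() {
      nativeClearInterval(handle);
      clock.uninstall();
    },
  };
}

module.exports = spedUpAutoTick;
```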

CreativeTechGuy commented 1 year ago

I just hopped in to help clarify since I saw there was some confusion about @Noriller's original request. I'm just having a discussion to help reach a conclusion; I don't have much of a personal stake in this.

Something to consider is that I assume most of the users who interact with this library do so via Jest, where it's a bit abstracted. Having to use the pattern you described, @fatso83, would make Jest's config option fakeTimers: { enableGlobally: true } unusable. Since there's already a global Jest fakeTimers.advanceTimers setting, a new setting for this would make sense too.
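
(For reference, the Jest-level config being referred to looks roughly like this. advanceTimers already exists in Jest as a boolean, or a number giving the advance delta in milliseconds; a ratio-style option would be a new, hypothetical addition:)

```js
// jest.config.js
module.exports = {
  fakeTimers: {
    enableGlobally: true,
    // Existing option: advance fake time automatically, 20ms per 20ms of real time.
    advanceTimers: 20,
    // A ratio-style option like the one discussed above would have to be
    // added here as well (hypothetical, does not exist today).
  },
};
```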

I think it's interesting that the line was drawn where it is. If shouldAdvanceTime didn't exist, we could be having this exact same conversation, as everything we've talked about here applies to manually implementing that feature in user-land too. I wonder why that feature was added to the core library, and why the line was then drawn here, where no additional options can be added to make that existing feature more flexible.

fatso83 commented 1 year ago

There's nothing preventing new features per se; it's just that adding every conceivable use case is quite unhealthy for a library, as it makes maintaining it harder. I learned that the hard way with the stubs API, where 90% of the API is just sugaring on top of the little bits you need. So before adding yet another bell or whistle, it should be clear that it is useful for people outside of a very specific niche. And if they can easily scratch their itch in some other way, then maybe that's ok. Often people don't know how these things work, so it's worth asking. That's why I think you made a good point about the Jest use case. It does some weird stuff that sometimes ends up with a leaky abstraction, which is hard to remember from sitting here in Mocha test land 😄

Noriller commented 1 year ago

Thank you, I understand. TBH, I thought that something like this would fix some tests that were slower than they should be. Maybe I did something wrong, but I tried making the changes inside node_modules to check if I could actually make them run faster... they didn't.

I still think it's a cool idea, but right now I don't really have a use case for that.

For my part, I understand if you want to close it (the PR #460 too). And if someone finds a use case for this, they should already have a good starting point.

stale[bot] commented 6 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

fatso83 commented 4 months ago

I appreciate all the effort that went into this issue and the PR, @Noriller , but I am trying to do some yearly cleanup on the project and I cannot see this moving anywhere, so I will close this for now. I am not vehemently opposed to this, as @CreativeTechGuy made good arguments for why one should allow these things, but I have yet to see any actual benefits.

If there is a suite of test cases that was made substantially faster and/or simpler, I am fully open to reopening!