rrousselGit opened this issue 2 years ago
From my tests, changing FakeAsync.run to have ZoneSpecification(scheduleMicrotask: null) is enough to get the previous code working. I don't see an obvious issue, so feel free to correct me.

I can make a PR adding optional args to disable this, such as:

fakeAsync(callback, mockMicrotasks: false);

But I do think having this be false by default would be a significant user-experience improvement, and worth the breaking change.
Microtasks are mostly unrelated to timer events. The only real relation is that microtasks scheduled by one timer event should be separate from (and complete before) any other timer event or other top-level event.
Not mocking microtasks should be safe, as long as no other code expects to know when microtasks run. (Future result propagation doesn't always use the microtask queue, so it's like a third level of event propagation. There is no way to intercept that, so intercepting microtasks won't intercept all "events". Same for synchronous completers/controllers.)
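To illustrate that ordering in plain Dart (a sketch, no fake_async involved):

import 'dart:async';

void main() {
  // A microtask scheduled by one timer callback runs before the next
  // timer callback, even when both timers are already due.
  Timer(Duration.zero, () {
    print('timer 1');
    scheduleMicrotask(() => print('microtask from timer 1'));
  });
  Timer(Duration.zero, () => print('timer 2'));
  // Prints: timer 1, microtask from timer 1, timer 2.
}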
(That said, I'm not entirely sure what fake_async is trying to fake. I sometimes see code checking whether the microtask queue is empty, and you can't do that without intercepting microtasks. If it doesn't meddle with microtasks, maybe it is really a "mock_clock" instead?)
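A sketch of such a check, using FakeAsync.microtaskCount from package:fake_async:

import 'dart:async';
import 'package:fake_async/fake_async.dart';
import 'package:test/test.dart';

void main() {
  test('microtask queue is observable', () {
    fakeAsync((async) {
      scheduleMicrotask(() {});
      expect(async.microtaskCount, 1); // visible only because it's intercepted
      async.flushMicrotasks();
      expect(async.microtaskCount, 0);
    });
  });
}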
Flutter is using a workaround to handle this case. I found it here: https://github.com/flutter/flutter/issues/60675
/// Runs a callback using FakeAsync.run while continually pumping the
/// microtask queue. This avoids a deadlock when tests `await` a Future
/// which queues a microtask that will not be processed unless the queue
/// is flushed.
Future<T> _runFakeAsync<T>(Future<T> Function(FakeAsync time) f) async {
  return FakeAsync().run((FakeAsync time) async {
    bool pump = true;
    final Future<T> future = f(time).whenComplete(() => pump = false);
    while (pump) {
      time.flushMicrotasks();
    }
    return future;
  });
}
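A sketch of how this helper might be used (the test name and durations are illustrative; it relies on package:test and the _runFakeAsync helper above). Elapsing the fake clock before awaiting lets the pumped microtasks resume the callback:

test('timer completes after elapse', () {
  return _runFakeAsync<void>((async) async {
    final future = Future<void>.delayed(const Duration(seconds: 1));
    async.elapse(const Duration(seconds: 1));
    await future; // resumes via the continually flushed microtask queue
  });
});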
I'm seeing the same thing. I've tried a few things, but still no luck.
This test passes, but shouldn't.
test('fake async', () async {
  return FakeAsync()
    ..run<void>((async) async {
      await Future<void>.delayed(const Duration(milliseconds: 1));
      async.elapse(const Duration(milliseconds: 5000));
      throw Exception('this test should fail');
    });
});
This test (which includes the workaround referenced by @MattisBrizard) hangs forever on the Future.delayed, presumably because flushMicrotasks never fires the pending timer, so the pump loop spins without the callback ever resuming:
test('fake async', () async {
  return _runFakeAsync<void>((async) async {
    await Future<void>.delayed(const Duration(milliseconds: 1));
    async.elapse(const Duration(milliseconds: 5000));
    throw Exception('this test should fail');
  });
});
This is what I think @rrousselGit is talking about when saying "changing FakeAsync.run to have ZoneSpecification(scheduleMicrotask: null) is enough to get the previous code working", but this test also passes when it should fail:
test('fake async', () async {
  return FakeAsync()
    ..run<void>((async) async {
      await runZoned(
        () async {
          await Future<void>.delayed(const Duration(milliseconds: 1));
          async.elapse(const Duration(milliseconds: 5000));
          throw Exception('this test should fail');
        },
        zoneSpecification: const ZoneSpecification(scheduleMicrotask: null),
      );
    });
});
I was able to get a failure, as expected, using this:
test('fake async', () async {
  return FakeAsync()
    ..run<void>((async) {
      final completer = Completer<void>();
      Future<void>.delayed(
        const Duration(milliseconds: 1),
        completer.complete,
      );
      async.elapse(const Duration(milliseconds: 5000));
      expect(completer.isCompleted, true);
      throw Exception('this test should fail');
    });
});
But by changing the body to an async function, it returns a false positive again:
..run<void>((async) async {
I think that while the ability to mock microtasks can be useful, most use-cases don't really care about it. Instead, people likely use fake_async for Duration-based APIs like clocks or timers.

Not mocking microtasks would allow writing things like:
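(A sketch assuming microtasks are not mocked; with today's behavior the await below stays queued on the fake microtask queue and never resumes.)

test('elapsing a fake timer', () {
  return fakeAsync((async) async {
    final future = Future<void>.delayed(const Duration(seconds: 1));
    async.elapse(const Duration(seconds: 1));
    await future; // would complete, since microtasks would run for real
  });
});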
Unless I'm missing something important and mocking microtasks is necessary for a proper Timer mock, I think this behavior would be more intuitive for users.