Currently looking into this,
@rustbot claim
I know you are still working on epoll, so I thought I would just ask this here. Last month this code was added to epoll_create1. It calls Epoll::default() a second time in the function, sets its ready_list to what I would think is the default anyway, and then lets the variable go out of scope without further use. Maybe a merge artifact? https://github.com/rust-lang/miri/blob/9efab211d308a85a0afa6af229b08c786bf84477/src/shims/unix/linux/epoll.rs#L210-L214
Apologies if I am misreading something. And for using this issue thread. This would not be the cause of a deadlock.
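For readers without the link open, the lines being described have roughly this shape (a hypothetical reconstruction based on the description above, not a copy of the Miri source):
```rust
// Leftover shape: a second Epoll value is created, its ready_list is reset to
// (what looks like) its default, and then the value is dropped unused.
let mut epoll_instance = Epoll::default();
epoll_instance.ready_list = Rc::new(RefCell::new(BTreeMap::new()));
// `epoll_instance` is never touched again below this point.
```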
Huh yea, that looks like a rebase artifact, good find. The first two lines should just not exist anymore at all.
Wow, nice catch!
Reproducer:
```rust
use tokio_util::task;

#[tokio::main]
async fn main() {
    let pool = task::LocalPoolHandle::new(1);
    pool.spawn_pinned(|| async {}).await;
    pool.spawn_pinned(|| async {}).await;
}
```
Current progress:
As Miri reported a deadlock on both epoll and futex, this might be an issue with either epoll, futex, or both.
Also, when running this test with --many-seeds, not every seed reported a deadlock, so @oli-obk and I are unsure whether this could potentially be an actual deadlock.
@Darksonn do you have any idea what could possibly be happening in the reproducer?
I took a quick look but wasn't able to see the issue. But be aware that the test has two runtimes, so you have two epoll instances in that test. Not sure what effect that has.
The reproducer is helpful, but it can be simplified: the tokio function only needs to be called once, though it then took about twice as many seeds before the deadlock was reported. In my build, a seed value of 9 triggered the problem; I started from seed 1, so eight seeds had passed before the failure.
Nice! I think we can start minimizing then by manually inlining aggressively
Maybe this helps understand the failing code a little better. There are three threads being reported as deadlocked. I think this means there are three threads with no chance of making further progress in the simulator. Likely they aren't actually deadlocked with each other. But the design would have called for a clean exit on the main thread after the nop future was polled on the worker thread.
So it appears an event was lost or an event didn't get created.
Of the three threads, two are blocked on epoll, and one is blocked on a futex.
The two epolls being blocked on are actually in different tokio runtimes. There is the multi_thread tokio runtime/scheduler that was responsible for the futex blocked thread and the main thread. And there is a current_thread (aka single threaded tokio) scheduler that is also blocked on an epoll.
The output I'm looking at shows the three thread backtraces; I'll number them 1, 2, and 3, in the order they are printed.
1 - shows a thread name of "unnamed-2" and is the current_thread tokio blocked on epoll.
2 - shows no thread name, is blocked on a futex and condvar, and deeper down the stack shows is part of the tokio multi_thread runtime.
3 - shows a thread name of "tokio-runtime-w", and is the multi-thread runtime blocked on epoll.
I'm going to go out on a limb and guess that thread 1 above is the sole worker thread that was created for the pool and it is in its normal state of having nothing to do so it waits. The logic in Miri should have signaled that the work it was given has completed.
> shows no thread name,
That means it is the main thread.
Miri doesn't know which thread is blocked "on which other thread", so when it reports a deadlock that means all threads are blocked on something, but the actual deadlocked cycle might not involve all the mentioned threads.
But maybe deadlock is a misnomer. I take deadlock to mean a mutual embrace, while a simulator like Miri is probably only reporting that there is no simulated thread able to make further progress. None is technically blocked waiting for another.
And to my naive eye, it is interesting that the main thread is reported blocked on the futex. I always thought that tokio would run its scheduler on the main thread, but probably that was just a bad assumption.
I think deadlock is the correct term. If no thread can make progress, that means there's a subset of threads that are in a dependency cycle.
The thread blocked on the futex is interesting.
I suspect it was started by the multi_thread scheduler to watch for when the worker thread wants to signal that it completed the work given to it.
If the futex/condvar is blocked, I think that points to either the worker thread never seeing the work it was supposed to do, or it did the work and signaled the completion, but the simulated futex didn't match the send with the receive.
I think the worker's work was received and done, as evidenced by other steps I took in narrowing down the possibilities. But that analysis also ran into new problems not worth mentioning here so everything here needs to be taken with a grain of salt.
Generally, each Tokio runtime will have one thread blocked on epoll, and all other threads blocked on a futex/condvar. In the case of the main thread, that's a block_on call, which is always in the futex/condvar category.
Maybe we need to do a log dump of all FFI calls and their args, as well as thread switches, in a successful run and in a run that deadlocks, and compare them. That may give some insights.
One thing @tiif noticed was that epoll_wait was never invoked on a real system, even though Miri ends up with a deadlock waiting on such a call finishing.
> that epoll_wait was never invoked on a real system
I'm not convinced that this is really true. When the extra threads become idle, they would use epoll_wait to sleep. Of course, there's a possibility that the main thread causes the process to abort before that happens.
> I'm not convinced that this is really true.
Thank you. I also took that as a red herring. And I'm running into my own fair share of those. Glad I don't post something every time I think a new interesting pattern has emerged. :)
By looking at the tokio-util code, I see where the current_thread runtime comes from. So that makes sense.
I don't know how to dump FFI calls, I'm presuming those are simulated FFI calls. But I would like to see how Futexes are simulated in Miri anyway so this is good.
This strace of a real execution does not contain a single epoll_wait: https://rust-lang.zulipchat.com/user_uploads/4715/MR22oZW2Mj8PVXHkh_XNk--8/log
We don't have a good way of logging emulated FFI calls. A full trace may be too noisy, so it may require some log filtering or modifying Miri to just log the interesting parts.
The interplay of the code running in the worker that would use condvar to indicate the work is done and then presumably call notify_one, and the condvar loop running in the main thread which is waiting to see some shared state change is where I want to get further visibility. I'm still looking for the tokio call that does these two sides, and then I'll look at the Miri shim for those pieces. (I think shim is the right term but I could be wrong.)
The reproducer can be made even smaller. The tokio current_thread scheduler can be used for main and then the stack dumps show only two threads were active. Not sure this makes analyzing the problem easier though.
```rust
use tokio_util::task;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let pool = task::LocalPoolHandle::new(1);
    let _ = pool.spawn_pinned_by_idx(|| async {}, 0).await;
}
```
Just an update - I am making progress in understanding where the passing path diverges from the failing path in the "best" case, best defined by the longest common path created by the Miri PRNG. Thank goodness for the INFO messages from the miri::machine module. I clearly see a difference in the Miri shim use of the first futex. On the failing side, futex_wake is never called, because the tokio code seems to have already decided there is no-one left on the other side of an unbounded mpsc channel (funny, I'm not sure yet whether it is the sender or the receiver that was thought to be gone - I just see that on the passing path the tokio::sync::mpsc::UnboundedSender code kicks in, while on the failing side std::rt::panic_count::count_is_zero kicks in). But even this, take with a grain of salt. I need to reproduce the results, and hopefully somehow reduce the diff noise that came before.

I haven't looked at how Miri uses its PRNG from the seed - maybe there is already a way to do this, but it would be nice if there were a second level of seed that could kick in after 'n' calls to the first PRNG - 'n' could then be determined manually with another binary search. So when either a good seed or a bad seed was found, one could play with when to diverge down a new path. As it is, I have a passing seed of 167 and a failing seed of 178 that show remarkable congruency for well over a quarter of the extensive trace outputs (some 33,000 lines are the same). But before the interesting things happen around line 105,000, there are lots of small and medium differences - and I haven't yet figured out the significance of those differences ahead of the large divergence.

I also have ideas for reducing the noise by running tokio with fewer features enabled, and even hard-wiring some cfg macros, like the ones for coop and MetricsBatch, that one wouldn't expect to affect the final results but, because of their use of the cell module, certainly create a lot of variation possibilities for Miri from seed to seed.
If someone finds a quicker solution, I won't mind but this investigation is certainly fun so far.
It sounds like you want exactly what loom does, just under MIRI :)
I think a seed fuel for breaking the RNG after n RNG invocations is a great idea. All we'd need to do is track the number of RNG invocations and interleave a single random bool after N has been reached.
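To make the idea concrete, here is a minimal standalone sketch of that mechanism (hypothetical code, not Miri's actual RNG plumbing): count the draws and, once a chosen count is reached, consume one extra value so that everything after that point diverges.
```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Wraps a seeded RNG, counts invocations, and "forks" the random
/// sequence after a chosen number of draws.
struct CountingRng {
    inner: StdRng,
    calls: u64,
    fork_at: Option<u64>, // e.g. found by binary search between a passing and a failing run
}

impl CountingRng {
    fn new(seed: u64, fork_at: Option<u64>) -> Self {
        Self { inner: StdRng::seed_from_u64(seed), calls: 0, fork_at }
    }

    fn gen_bool(&mut self, p: f64) -> bool {
        self.calls += 1;
        if self.fork_at == Some(self.calls) {
            // Interleave one extra draw: every decision after this point takes
            // a different path than an un-forked run with the same seed.
            let _ = self.inner.gen_bool(0.5);
        }
        self.inner.gen_bool(p)
    }
}

fn main() {
    let mut rng = CountingRng::new(167, Some(3));
    let decisions: Vec<bool> = (0..6).map(|_| rng.gen_bool(0.5)).collect();
    println!("{decisions:?}");
}
```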
@Darksonn Question, does the "net" feature on tokio and on tokio-util have anything in common?
In my new try at reducing the tracing output, I started to run with less than "full" features for tokio and tokio-util, and to my surprise found lots of ways to get the failing epoll test to pass. I wouldn't be surprised if without "net" there is no epoll, but then how does the main shown above even run?
Anyway, after binary searches on the features for the two crates, I found that adding "net" to either dependency caused the failure path.
So in case it helps anyone figure this out faster: running with a Cargo.toml where tokio just uses "rt-multi-thread" and "macros", and tokio-util just uses "rt", I didn't see the test fail across a hundred different seeds, while usually about one in five would fail. With "net" added to either, it would fail basically immediately. So, a big difference.
Again, I wouldn't be surprised if, without "net" on one or the other, I'm not testing what I want to be testing. But I have to go out for a while.
Later, I can modify a local tokio to see which branches from the "net" feature are significant to this.
I observed that adding certain dependencies will invoke epoll, which is pretty surprising to me too. For example, in https://github.com/rust-lang/miri/issues/3858#issuecomment-2338449702, the ICE will only be triggered after adding test-util.
Also, thanks for helping to narrow down the cause. I will keep following this thread while trying different stuff on my side.
Well, okay. The "net" feature on tokio-util passes "net" to tokio. I should have looked at its Cargo.toml before sounding like I didn't know how to figure anything out! So I'll focus on what aspect of "net" to tokio makes the difference.
A bit more narrowing. The tokio/net feature causes the mio/os-poll feature to be pulled in. I've switched to running the test with the current_thread runtime, as that only brings two threads into play instead of three and still displays the failure in about 20% of the seeded tests - so there are fewer differences to compare.
I've narrowed it down to the PRNG in the miri concurrency::weak_memory module. Interesting that it is not the PRNG responsible for the thread switching or any of the other ten places a PRNG is used by miri.
Here are some notes I made along the way.
Three things I can say after further refining the output:
1. which Miri PRNG function is hitting the error: the rng used by fetch_store in the Miri concurrency::weak_memory module,
2. the last commonality between the failing and passing main threads is evident,
3. and the initial differences on the unnamed-1 thread are evident.
The TL;DR: if someone wants to peruse this and throw out suggestions on how to pursue the idea that either Miri or Tokio is doing something wrong, I'm all ears.
1. Think I've narrowed down where the PRNG affects the weak memory, in a fetch_store:
(Miri code)
```rust
// ...
let chosen = candidates.choose(rng).expect("store buffer cannot be empty");
if std::ptr::eq(chosen, self.buffer.back().expect("store buffer cannot be empty")) {
    (chosen, LoadRecency::Latest)
} else {
    (chosen, LoadRecency::Outdated)
}
```
If I change the Miri code to always return the first element of the candidates, the failures go away.
2. Main thread details:
At the end of the failed main thread:
```text
INFO miri::machine Continuing in mio::sys::unix::selector::epoll::Selector::select
INFO miri::concurrency::thread ---------- Now executing on thread `main` (previous: `unnamed-1`) ----------------------------------------
[src/shims/unix/linux/epoll.rs:451:9] EpollInterestTable::epoll_wait
[src/shims/unix/linux/epoll.rs:134:9] Epoll::get_ready_list
```
And the main thread has nothing left to do, and then the deadlock error is reported.
While on a passing main thread, the tracing continues from there:
```text
INFO miri::concurrency::thread ---------- Now executing on thread `main` (previous: `unnamed-1`) ----------------------------------------
[src/shims/unix/linux/epoll.rs:657:5] blocking_epoll_callback
[src/shims/unix/linux/epoll.rs:134:9] Epoll::get_ready_list
[src/shims/unix/linux/epoll.rs:605:5] ready_list_next
[src/shims/unix/linux/epoll.rs:181:9] EpollInterestTable::get_epoll_interest
[src/shims/unix/linux/epoll.rs:605:5] ready_list_next
```
3. Unnamed-1 thread details (the thread created for the pool):
There are a good number of differences between the passing version and the failing version.
I'll just list the last line that the two versions would have in common with the hope that
someone can point me to what to look at further in Miri and Tokio.
Last common line before a difference:
```text
INFO miri::machine Continuing in tokio::runtime::scheduler::inject::Inject::<std::sync::Arc<tokio::runtime::scheduler::current_thread::Handle>>::pop
```
Then the paths merge again, and here is the next last line before a difference:
```text
INFO miri::machine Continuing in tokio::runtime::scheduler::current_thread::CoreGuard::<'_>::block_on::<std::pin::Pin<&mut {async fn body of tokio::task::LocalSet::run_until<{async block@tokio_util::task::spawn_pinned::LocalWorkerHandle::run::{closure#0}}>()}>>::{closure#0}
```
Again (same description):
```text
INFO miri::machine Continuing in tokio::runtime::scheduler::current_thread::CoreGuard::<'_>::block_on::<std::pin::Pin<&mut {async fn body of tokio::task::LocalSet::run_until<{async block@tokio_util::task::spawn_pinned::LocalWorkerHandle::run::{closure#0}}>()}>>::{closure#0}
```
And then finally things start to swing more wildly.
The two paths had merged again, but then instead of a common line followed by lots of Miri trace on one side and nothing on the other (where I take it the PRNG had kicked in), now there is a difference of one trace line (passing marked with a 'p', failing with an 'f'):
```text
  INFO miri::machine Leaving std::cell::RefCell::<std::option::Option<std::boxed::Box<tokio::runtime::scheduler::current_thread::Core>>>::borrow_mut
p INFO miri::machine Continuing in tokio::runtime::scheduler::current_thread::Context::enter::<(), {closure@tokio::runtime::scheduler::current_thread::Context::run_task<(), {closure@tokio::runtime::scheduler::current_thread::CoreGuard<'_>::block_on<std::pin::Pin<&mut {async fn body of tokio::task::LocalSet::run_until<{async block@tokio_util::task::spawn_pinned::LocalWorkerHandle::run::{closure#0}}>()}>>::{closure#0}::{closure#1}}>::{closure#0}}>
f INFO miri::machine Continuing in tokio::runtime::scheduler::current_thread::Context::enter::<(), {closure@tokio::runtime::scheduler::current_thread::Context::park::{closure#1}}>
  INFO miri::machine Leaving std::ptr::NonNull::<std::option::Option<std::boxed::Box<tokio::runtime::scheduler::current_thread::Core>>>::as_ptr
```
> I've narrowed it down to the PRNG in the miri concurrency::weak_memory module
Interesting find!
You can disable the weak memory non-determinism with -Zmiri-disable-weak-memory-emulation. Does the deadlock still reproduce with that?
If not, then the likely cause is that epoll fails to properly synchronize the vector clock of the woken-up thread with the event that caused the wakeup.
@RalfJung Yes, disabling the weak memory non-determinism with that flag allows the test to pass, with any seed I give it.
Looking for causes now.
Okay, in that case it's almost certainly a lack of proper vector clock synchronization as part of the epoll-induced wakeup. You can look e.g. at the clock field in eventfd to see how this must be done, though figuring out the details for epoll could be a bit tricky since it is a more complicated primitive.
This also means there is likely a way to write a program where Miri reports UB but it doesn't actually have UB. Something like: one thread writes to a static mut and then writes to the pipe; the other thread blocks in epoll_wait and, once that returns, reads from the static mut (it never reads from the pipe!). This is basically the epoll version of this test.
So likely every interest list also needs to track a vector clock, and every time an event gets added to the interest list, the current thread's clock is joined into the interest list clock. And then when a wakeup happens, the woken-up thread acquires the interest list clock.
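A rough sketch of that shape, with hypothetical helper names standing in for Miri's actual data-race machinery (an illustration of the suggested synchronization, not the real shim code):
```rust
/// Hypothetical vector clock; Miri has its own VClock type.
#[derive(Default, Clone)]
struct VClock(Vec<u64>);

impl VClock {
    /// Pointwise maximum, i.e. "join" the other clock into this one.
    fn join(&mut self, other: &VClock) {
        if self.0.len() < other.0.len() {
            self.0.resize(other.0.len(), 0);
        }
        for (mine, theirs) in self.0.iter_mut().zip(&other.0) {
            *mine = (*mine).max(*theirs);
        }
    }
}

struct ReadyList {
    // ... the existing map of pending events would live here ...
    clock: VClock,
}

/// Release side: when a thread pushes an event onto the ready list,
/// its current clock is joined into the ready list's clock.
fn on_event_added(ready_list: &mut ReadyList, current_thread_clock: &VClock) {
    ready_list.clock.join(current_thread_clock);
}

/// Acquire side: when a blocked epoll_wait is woken up by that event,
/// the woken thread joins the ready list's clock into its own.
fn on_wakeup(ready_list: &ReadyList, woken_thread_clock: &mut VClock) {
    woken_thread_clock.join(&ready_list.clock);
}

fn main() {
    let mut rl = ReadyList { clock: VClock::default() };
    let producer = VClock(vec![3, 0]);
    let mut waiter = VClock(vec![0, 5]);
    on_event_added(&mut rl, &producer);
    on_wakeup(&rl, &mut waiter);
    assert_eq!(waiter.0, vec![3, 5]); // the wakeup now "happens after" the event
}
```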
@RalfJung Is there a way to add the strongest possible fence to the epoll code, to check that the problem clears up and confirm we would be going down the right path?
Fences on their own don't do synchronization, they only help in combination with a read-write pair relating two threads. But in epoll these reads/writes of the interest list are working on extra machine state, not regular memory. So no, just adding some fence won't help.
However, there is no doubt that returning from a blocked epoll should be an acquire operation (something happened in another thread, which woke up this thread -- there's a causal link here, and every such causal link should be reflected in the vector clocks). And currently I don't think there's any vector clock management in epoll. So even if we ignore this bug, this is something that must be fixed. And given that the issue disappears with -Zmiri-disable-weak-memory-emulation, I am fairly confident that fixing it will also fix the deadlock.
@RalfJung That actually makes a lot of sense. Thanks for spelling out what probably seemed obvious to you. Let me see if I can follow the bread crumbs in eventfd and the test you laid out.
Edit:
So eventfd has a read and a write and they join their VClock to the thread, or from the thread, respectively.
Thanks again for all the help here!
> So likely every interest list also needs to track a vector clock, and every time an event gets added to the interest list, the current thread's clock is joined into the interest list clock. And then when a wakeup happens, the woken-up thread acquires the interest list clock.
A thread is woken up only if there is at least one EpollEventInstance in the ready_list. So would it be possible that you are talking about synchronising the ready_list with the woken-up thread?
How it works is (with a lot of details omitted): an event happens, an EpollEventInstance is pushed to the ready_list, and the blocked thread is woken up.
Sorry, yes, I meant the ready list whenever I said "interest list".
There are two ready_lists defined in the epoll module. I'm still wrapping my head around the fact that there are two things holding the same type of BTreeMap. Is there really only one, shared by the two types through cloning? If so, then it sounds like we need a new struct to wrap the BTreeMap, or essentially turn it into a tuple of BTreeMap and VClock.
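For illustration, such a wrapper could look roughly like this (hypothetical stand-in types, not the actual Miri definitions):
```rust
use std::cell::RefCell;
use std::collections::BTreeMap;
use std::rc::Rc;

// Hypothetical stand-ins for the real Miri types.
type FdId = u64;
struct EpollEventInstance;
#[derive(Default)]
struct VClock;

/// The shared ready list, now paired with the vector clock that the
/// waking and woken threads would synchronize through.
#[derive(Default)]
struct ReadyList {
    events: RefCell<BTreeMap<FdId, EpollEventInstance>>,
    clock: RefCell<VClock>,
}

// Both the Epoll instance and every EpollEventInterest registered under it
// would hold a clone of this Rc, so they all see the same list and clock.
type SharedReadyList = Rc<ReadyList>;

fn main() {
    let list: SharedReadyList = Rc::new(ReadyList::default());
    let alias = Rc::clone(&list); // "cloning" shares the list, it does not copy the map
    alias.events.borrow_mut().insert(3, EpollEventInstance);
    assert_eq!(list.events.borrow().len(), 1);
}
```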
They are the same thing. If an EpollEventInterest is registered under an Epoll instance, then it will inherit the same ready_list.
As expected, I managed to produce a data race error (and sadly not a deadlock) with the test below in libc-epoll-blocking.rs with ./miri run --dep ./tests/pass-dep/libc/libc-epoll-blocking.rs -Zmiri-preemption-rate=0:
```rust
fn test_epoll_race() {
    // Create an epoll instance.
    let epfd = unsafe { libc::epoll_create1(0) };
    assert_ne!(epfd, -1);

    // Create an eventfd instance.
    let flags = libc::EFD_NONBLOCK | libc::EFD_CLOEXEC;
    let fd = unsafe { libc::eventfd(0, flags) };

    // Register eventfd with the epoll instance.
    let mut ev = libc::epoll_event { events: EPOLL_IN_OUT_ET, u64: fd as u64 };
    let res = unsafe { libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, fd, &mut ev) };
    assert_eq!(res, 0);

    static mut VAL: u8 = 0;
    let thread1 = thread::spawn(move || {
        // Write to the static mut variable.
        unsafe { VAL = 1 };
        // Write to the eventfd instance.
        let sized_8_data: [u8; 8] = 1_u64.to_ne_bytes();
        let res = unsafe { libc::write(fd, sized_8_data.as_ptr() as *const libc::c_void, 8) };
        // write returns the number of bytes written, which is always 8 here.
        assert_eq!(res, 8);
    });
    thread::yield_now();

    // epoll_wait for the event to happen.
    let expected_event = u32::try_from(libc::EPOLLIN | libc::EPOLLOUT).unwrap();
    let expected_value = u64::try_from(fd).unwrap();
    check_epoll_wait::<8>(epfd, &[(expected_event, expected_value)], -1);

    // Read from the static mut variable.
    unsafe { assert_eq!(VAL, 1) };
    thread1.join().unwrap();
}
```
I kind of hope I can reproduce the deadlock issue purely using syscalls to check whether it is fixed.
Should the VClock be per ready_list, or per entry in the ready_list, i.e. per EpollEventInterest? The latter is more fine-grained, I think, and would only cause a clock sync from the thread resolving the event to the thread waiting for the event. Otherwise there seems to be more clock syncing to the thread being woken than is called for.
Isn't the ready list already per-epoll-instance?
But I guess per event is indeed better, since a call to epoll_wait may only consume some of the events in the list, and then only those should have their clock acquired.
> Isn't the ready list already per-epoll-instance?
Yes.
I have a concern here: if we were to support level-triggered epoll in the future, one event will be able to wake up more than one thread at once. I assume a clock can only be acquired once, so would this cause a problem?
So for level-triggered epoll, something like this might happen: a single event stays in the ready list and wakes up more than one thread. What we currently support is edge-triggered epoll, where one event will only wake up one thread, but this doesn't apply to level-triggered epoll.
> I assume a clock can only be acquired once
That assumption is wrong. :) It can be acquired any number of times.
@tiif Thank you for putting my latest confusion so clearly. I'm looking for where a fd event triggers the epoll mechanism, and wondering how the epoll event hooks up to the ready lists. Probably there is one global for all epoll events and that can be mapped back to the epolls?
> I kind of hope I can reproduce the deadlock issue purely using syscalls to check whether it is fixed.
As far as I am concerned, the above is a better test. But if you want to keep looking I won't stop you. :)
> That assumption is wrong. :) It can be acquired any number of times.
Nice to hear that! :D
> I'm looking for where a fd event triggers the epoll mechanism
Whenever an event happens, check_and_update_readiness will be invoked. From there, the event will be added to the ready list, and all the thread-unblocking work is done.
We have a global map that maps a target file description to its associated EpollEventInterests, so inside check_and_update_readiness we use that global map to retrieve all EpollEventInterests related to the target file description and update the corresponding ready_list.
But I think my explanation is vague here; if you need more detail you can always open a thread on Zulip or ping me directly :)
@tiif Perfectly clear. I was going down that path but hadn't noticed this.machine.epoll_interests. So I think the clock can be per ready list. I should be able to test this in a few minutes now, but you may want to fix it all up to your liking anyway; I totally understand.
Just go for it :). It'd be nice if you could also test the tokio reproducer above at the same time and make sure it passes (but you don't need to add it to the PR).
Yes, the tokio reproducer is one of my goals. Just getting it to compile is satisfying, but not that satisfying.
Description

When running one of the tokio tests with cargo +nightly miri test --features full --test spawn_pinned, Miri reported a deadlock on the epoll_wait syscall. This is likely caused by the epoll shim not receiving an epoll event notification to unblock the thread when it should have received one on a real-world system.

Version

Rustc:
Tokio: Tokio repo master branch commit 27539ae3

Full trace
```text
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.06s
     Running tests/spawn_pinned.rs (target/miri/x86_64-unknown-linux-gnu/debug/deps/spawn_pinned-5d9d1b31504e33c5)

running 9 tests
test callback_panic_does_not_kill_worker ... error: deadlock: the evaluated program deadlocked
  --> /home/byt/.cargo/registry/src/index.crates.io-6f17d22bba15001f/mio-1.0.2/src/sys/unix/selector/epoll.rs:56:9
   |
56 | /         syscall!(epoll_wait(
57 | |             self.ep.as_raw_fd(),
58 | |             events.as_mut_ptr(),
59 | |             events.capacity() as i32,
60 | |             timeout,
61 | |         ))
   | |__________^ the evaluated program deadlocked
   |
   = note: BACKTRACE on thread `unnamed-2`:
   = note: inside `mio::sys::unix::selector::Selector::select` at /home/byt/.cargo/registry/src/index.crates.io-6f17d22bba15001f/mio-1.0.2/src/sys/unix/mod.rs:8:48: 8:49
   = note: inside `mio::poll::Poll::poll` at /home/byt/.cargo/registry/src/index.crates.io-6f17d22bba15001f/mio-1.0.2/src/poll.rs:435:9: 435:61
note: inside `tokio::runtime::io::driver::Driver::turn`
  --> /home/byt/Documents/tokio/tokio/src/runtime/io/driver.rs:149:15
   |
149 |         match self.poll.poll(events, max_wait) {
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::io::driver::Driver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/io/driver.rs:122:9
   |
122 |         self.turn(handle, None);
    |         ^^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::signal::Driver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/signal/mod.rs:92:9
   |
92 |         self.io.park(handle);
   |         ^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::process::Driver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/process.rs:32:9
   |
32 |         self.park.park(handle);
   |         ^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::driver::IoStack::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/driver.rs:175:40
   |
175 |             IoStack::Enabled(v) => v.park(handle),
    |                                    ^^^^^^^^^^^^^^
note: inside `tokio::runtime::time::Driver::park_internal`
  --> /home/byt/Documents/tokio/tokio/src/runtime/time/mod.rs:247:21
   |
247 |                     self.park.park(rt_handle);
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::time::Driver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/time/mod.rs:173:9
   |
173 |         self.park_internal(handle, None);
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::driver::TimeDriver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/driver.rs:332:55
   |
332 |             TimeDriver::Enabled { driver, .. } => driver.park(handle),
    |                                                   ^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::driver::Driver::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/driver.rs:71:9
   |
71 |         self.inner.park(handle);
   |         ^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure
  --> /home/byt/Documents/tokio/tokio/src/runtime/scheduler/current_thread/mod.rs:382:17
   |
382 |                 driver.park(&handle.driver);
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::scheduler::current_thread::Context::enter::<(), {closure@tokio::runtime::scheduler::current_thread::Context::park::{closure#1}}>`
  --> /home/byt/Documents/tokio/tokio/src/runtime/scheduler/current_thread/mod.rs:423:19
   |
423 |         let ret = f();
    |                   ^^^
note: inside `tokio::runtime::scheduler::current_thread::Context::park`
  --> /home/byt/Documents/tokio/tokio/src/runtime/scheduler/current_thread/mod.rs:381:27
   |
381 |           let (c, ()) = self.enter(core, || {
    |  ___________________________^
382 | |             driver.park(&handle.driver);
383 | |             self.defer.wake();
384 | |         });
    | |______________^
note: inside closure
  --> /home/byt/Documents/tokio/tokio/src/runtime/scheduler/current_thread/mod.rs:724:33
   |
724 | ...                   context.park(core, handle)
    |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure
  --> /home/byt/Documents/tokio/tokio/src/runtime/scheduler/current_thread/mod.rs:774:68
   |
774 |         let (core, ret) = context::set_scheduler(&self.context, || f(core, context));
    |                                                                    ^^^^^^^^^^^^^^^^
note: inside `tokio::runtime::context::scoped::Scoped::
```