
Reduce contention in timer driver #6504

Open Darksonn opened 2 months ago

Darksonn commented 2 months ago

I have often heard reports about performance issues due to large numbers of timers. Specifically, this is due to contention on the mutex in the timer driver.

Some ideas:

Before spending a lot of time on a large change, please check with us beforehand by posting here or in Discord so that we can discuss whether your implementation idea is what we want.

PRs related to this issue:

conradludgate commented 2 months ago

I think sharding could go a long way with relatively little code. On TimerEntry creation, it acquires a random shard index, modulo the number of shards configured on the runtime. DashMap shards by default with 4 * num_cpus(). I think starting with the number of worker threads would go a long way on its own in reducing the lock contention, and making the shard count configurable later will help with any corner cases.

On second thoughts, driving each shard adds more complexity...
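
As a rough illustration of the shard-selection idea (hypothetical names, and a round-robin counter standing in for the random pick; this is a sketch, not Tokio's actual timer internals):

use std::sync::atomic::{AtomicUsize, Ordering};

// The runtime would configure this, e.g. to the number of worker threads.
pub struct TimerShards {
    num_shards: usize,
    next: AtomicUsize,
}

impl TimerShards {
    // Pick a shard index for a newly created TimerEntry.
    pub fn pick(&self) -> usize {
        self.next.fetch_add(1, Ordering::Relaxed) % self.num_shards
    }
}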

wathenjiang commented 2 months ago

Hi. I would like to provide a possible solution here.

I think for most web applications (and probably most applications in general), cancelling timeouts before they complete is the main source of performance issues.

(1) Background

In production environments, servers that handle network data requests often set timeouts for connections or requests.

See https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/

However, when many threads perform timeout operations concurrently, they can suffer from lock contention.

This is why some web servers use their own timeout crate instead of the timeout in tokio. See https://github.com/cloudflare/pingora/tree/main/pingora-timeout

(2) Solution

We can assign a TimeHandle or Inner in time/mod.rs to each worker thread. Then we move multiple TimeHandles from driver::Handle to worker::Shared, and each worker thread is assigned a TimeHandle in the same way as remote: Box<[Remote]>.

The details here can be further discussed and considered, and this is only a preliminary idea :)

Yes, we want to make the timer thread-local for every worker thread, to reduce the lock contention.

Since sleep/timeout must be executed in the worker thread, each sleep and timeout can find its thread-local timer.
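
A rough structural sketch of that layout, using made-up type contents (the real driver::Handle and worker::Shared are considerably more involved):

use std::sync::Mutex;

// Hypothetical stand-in for the timer wheel owned by one worker.
pub struct Wheel { /* timer entries */ }

// One per worker thread, so its mutex is mostly uncontended.
pub struct TimeHandle {
    wheel: Mutex<Wheel>,
}

// Lives in the runtime's shared state, mirroring how remote: Box<[Remote]>
// gives each worker its own slot.
pub struct Shared {
    time_handles: Box<[TimeHandle]>,
}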

(3) Performance improvement

use std::time::Duration;
use tokio::time::timeout;

#[tokio::main]
async fn main() {

    let fut = async { 1 }; // a very quick async task, guarded by a timeout
    let _to = timeout(Duration::from_secs(1), fut).await; // the timer is cancelled as soon as fut completes
}
conradludgate commented 2 months ago

> Since sleep/timeout must be executed in the worker thread, each sleep and timeout can find its thread-local timer.

This is not true. You can spawn a separate thread for the tokio runtime and register/poll the timeout on a different thread by using Handle::enter() on that thread.

This is how https://github.com/smol-rs/async-compat works
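
For concreteness, a minimal sketch of that pattern (the tokio calls are standard; the futures crate is assumed here just to poll the timer from a non-tokio executor):

use std::time::Duration;
use tokio::runtime::Runtime;

fn main() {
    let rt = Runtime::new().unwrap();
    let handle = rt.handle().clone();

    // A plain OS thread that is not a runtime worker.
    std::thread::spawn(move || {
        // Entering the handle makes the runtime's timer driver available on
        // this thread, so the Sleep below registers with that runtime...
        let _guard = handle.enter();
        // ...even though it is created and polled off the worker threads.
        futures::executor::block_on(tokio::time::sleep(Duration::from_millis(10)));
    })
    .join()
    .unwrap();
}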

wathenjiang commented 2 months ago

Yes, your view is correct. More precisely, sleep and timeout require the runtime handle stored in thread-local storage.

wathenjiang commented 2 months ago

I think we can achieve our goal this way:

The advantage of doing so is that there is no need to introduce the additional complexity of sharding.

Always sharding by Instant is another solution I came up with, but because it loses some of the thread-locality, the cost of lock contention might be higher. I prefer the first option, but the second option is also acceptable.

Darksonn commented 2 months ago

Here's what I'm thinking. We shard the timer into several shards. When timers are created from within the runtime, they register with whichever timer shard is associated with the current runtime thread. Timers from outside the runtime pick a timer shard randomly.

When a timer is cancelled before completion, it must lock the shard it was registered with. That may or may not be the one on the current thread, depending on whether it has been moved across threads. It could be worth asking someone with a running production system to instrument their timers to see how often that happens.

Anyway, when we poll the IO/timer driver, it's generally done on just one thread. To start with, I don't think we should try to change that. The driver can just lock all shards one after the other, get the minimum timer from each one, and use that for its sleep duration.

I'd like to avoid locking the timer driver mutex in drop of Sleep when not necessary. For example, if the Sleep has not yet been polled, or if it has completed, then I think we can probably avoid taking the mutex.

None of this should happen in one PR. I think these changes can be done incrementally.
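
A simplified sketch of that shape, with stand-in types (the real driver uses timer wheels rather than flat lists, and the locking is more careful):

use std::sync::Mutex;
use std::time::Instant;

pub struct Shard {
    // deadlines registered with this shard; a wheel in the real driver
    entries: Mutex<Vec<Instant>>,
}

pub struct Timer {
    shards: Box<[Shard]>,
}

impl Timer {
    // Shard for a new timer: the current worker's shard when on a runtime
    // thread, otherwise an arbitrary (e.g. random) one.
    pub fn shard_for_registration(&self, worker_id: Option<usize>, random: usize) -> &Shard {
        let idx = worker_id.unwrap_or(random) % self.shards.len();
        &self.shards[idx]
    }

    // Driver side: lock each shard in turn and take the earliest deadline
    // as the next wakeup time.
    pub fn next_expiration(&self) -> Option<Instant> {
        self.shards
            .iter()
            .filter_map(|s| s.entries.lock().unwrap().iter().min().copied())
            .min()
    }
}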

wathenjiang commented 2 months ago

I would like to solve this issue with a series of incremental PRs. It might start with removing the unnecessary lock contention when a timeout is cancelled before it completes (often the case for timeouts).

Darksonn commented 2 months ago

That's great! Specifically, what change do you intend to start with in the first PR?

wathenjiang commented 2 months ago

Hi. I think the most pressing issue currently is that even if a timeout was never registered with the driver, the lock is still taken when it is cancelled. This will be my first change.
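
A sketch of that fast path, with made-up state names (the real TimerEntry state machine has more states and stricter memory-ordering requirements):

use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Mutex;

pub const NOT_REGISTERED: u8 = 0; // created but never inserted into the wheel
pub const REGISTERED: u8 = 1;     // currently sitting in the wheel
pub const FIRED: u8 = 2;          // already completed and removed by the driver

pub struct TimerEntry {
    state: AtomicU8,
    driver_lock: Mutex<()>, // stand-in for the shared driver state
}

impl Drop for TimerEntry {
    fn drop(&mut self) {
        match self.state.load(Ordering::Acquire) {
            // Never inserted, or already fired: nothing to remove, so the
            // shared lock is skipped on this (common) cancellation path.
            NOT_REGISTERED | FIRED => {}
            // Still in the wheel (REGISTERED): lock the driver and deregister.
            _ => {
                let _guard = self.driver_lock.lock().unwrap();
                // remove the entry from the wheel here
            }
        }
    }
}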

Darksonn commented 2 months ago

I agree. That would also be my first change. Go for it. :)

conradludgate commented 2 months ago

I tried removing the lock call on drop, but it wasn't super trivial because of the happens-before requirements; loom was not happy. The not-yet-polled state can be hardcoded. The deregistered state could perhaps be set when the sleep returns Ready, but I'm not sure that will solve the loom problems, since that path also only reads the deregistered-state atomic.

carllerche commented 2 months ago

> Here's what I'm thinking. We shard the timer into several shards. When timers are created from within the runtime, they register with whichever timer shard is associated with the current runtime thread. Timers from outside the runtime pick a timer shard randomly.
>
> When a timer is cancelled before completion, it must lock the shard it was registered with. That may or may not be the one on the current thread, depending on whether it has been moved across threads. It could be worth asking someone with a running production system to instrument their timers to see how often that happens.
>
> Anyway, when we poll the IO/timer driver, it's generally done on just one thread. To start with, I don't think we should try to change that. The driver can just lock all shards one after the other, get the minimum timer from each one, and use that for its sleep duration.
>
> I'd like to avoid locking the timer driver mutex in drop of Sleep when not necessary. For example, if the Sleep has not yet been polled, or if it has completed, then I think we can probably avoid taking the mutex.
>
> None of this should happen in one PR. I think these changes can be done incrementally.

This strategy sounds good to me.