blackbeam / rust-mysql-simple

MySQL client library implemented in Rust.
Apache License 2.0
658 stars 144 forks

Very high CPU usage when accessing BufferPool #301

Closed pliard-b closed 2 years ago

pliard-b commented 2 years ago

We have a Rust service with thousands of threads, each using its own mysql::Conn to connect to multiple databases. We noticed up to 10x CPU usage increases when going from mysql v20 to v21. The regression appears to be caused by contention on the newly added global buffer pool, combined with the std::sync::Mutex implementation's use of spinlocks (see below). For reference, we use jemalloc, and the on-demand Vec allocations that BufferPool replaced did not appear to be a bottleneck for us.

[screenshot: CPU profile]

Here are some suggestions that I can think of for how this could be addressed:

1) Revert to allocating Vecs on demand

2) Make each mysql::Conn object have its own buffer

3) Have BufferPool use a lock-free pool data structure, such as the one the pool crate provides

In addition to these, a hybrid approach could be used where a new flag controls whether 1)-2) or 3) is used.
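To make the contention-avoidance idea concrete, here is a minimal sketch of a pool that never blocks: `get` uses `try_lock`, so if the lock is contended the caller falls back to a fresh allocation instead of spinning, and `put` simply drops the buffer when the lock is busy or the pool is full. This is purely illustrative; the `BufferPool` name, `cap` field, and methods here are hypothetical and are not the crate's actual API.

```rust
use std::sync::Mutex;

// Hypothetical non-blocking buffer pool (illustration only, not the
// crate's real implementation). Contended callers allocate instead
// of waiting on the mutex.
struct BufferPool {
    buffers: Mutex<Vec<Vec<u8>>>,
    cap: usize, // maximum number of pooled buffers
}

impl BufferPool {
    fn new(cap: usize) -> Self {
        BufferPool { buffers: Mutex::new(Vec::new()), cap }
    }

    fn get(&self) -> Vec<u8> {
        match self.buffers.try_lock() {
            Ok(mut pool) => pool.pop().unwrap_or_default(),
            Err(_) => Vec::new(), // lock contended: allocate instead of spinning
        }
    }

    fn put(&self, mut buf: Vec<u8>) {
        buf.clear(); // drop contents, keep the allocated capacity
        if let Ok(mut pool) = self.buffers.try_lock() {
            if pool.len() < self.cap {
                pool.push(buf);
            }
        }
        // lock busy or pool full: the buffer is simply freed
    }
}

fn main() {
    let pool = BufferPool::new(4);
    let mut buf = pool.get();
    buf.extend_from_slice(b"packet");
    pool.put(buf);
    let reused = pool.get();
    assert_eq!(reused.len(), 0);     // cleared on return to the pool
    assert!(reused.capacity() >= 6); // capacity retained from the reuse
    println!("ok");
}
```

A truly lock-free variant would swap the `Mutex<Vec<_>>` for a lock-free queue (as in suggestion 3), but the fallback-to-allocate behavior shown here is the key point: worst-case cost degrades to plain allocation rather than spinlock contention.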

blackbeam commented 2 years ago

Hi. This might be addressed in blackbeam/mysql_async#170. I'm planning to port this for the next release.

Also, I think a flag that turns off the pool is the best option for your case, so I'll add it.

pliard-b commented 2 years ago

Thanks a lot @blackbeam for fixing this. We will give the pool another try with the crossbeam ArrayQueue. Disabling the pool through the environment variable seems a bit fragile though. Perhaps this could be done in the future via a cargo feature?

blackbeam commented 2 years ago

Disabling the pool through the environment variable seems a bit fragile though. Perhaps this could be done in the future via a cargo feature?

Agree. This is now addressed and will be in v22.0.0 release.

pliard-b commented 2 years ago


Great, thank you!