andresmargalef opened this issue 4 years ago
Yes, we've been actively looking into this. I have made some attempts, and currently ritsu can be used in combination with hyper. :D
https://github.com/quininer/ritsu/blob/master/ritsu-hyper/src/main.rs
@quininer I will try that code. Have you seen any performance improvement?
@andresmargalef I am currently not focused on performance improvement.
Have any of you had a look at rio? It seems to be a safe approach to io_uring
The rio readme states:
> use-after-free bugs are still possible without unsafe when using rio
AFAIK there is no way to build a safe and sound API on top of io_uring without redesigning Tokio's IO traits.
I would also add that you can use Glommio, which uses io_uring internally:
https://github.com/DataDog/glommio/blob/master/examples/hyper.rs
#577 So, does hyper support io_uring?
Tokio is working on io-uring here
Any web framework which supports io_uring?
> Any web framework which supports io_uring?

Unlikely, given that most frameworks are built on top of hyper. There is this https://github.com/actix/actix-web/issues/2404 but that's specifically for files 🤷
So, is there any next-generation web framework which will support io_uring in the future?
Just support monoio and then we don't have to keep on upgrading for speed, right? The world is complicated enough.
I have a runtime that is based on io_uring: Heph, which is based on A10. I'm trying to port Hyper's client to it, but Hyper's I/O traits will not work with io_uring (without introducing an intermediary buffer).
Taking hyper::rt::Read as an example, it's defined as below.
pub trait Read {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: ReadBufCursor<'_>,
    ) -> Poll<Result<(), Error>>;
}
The problem is the lifetime in ReadBufCursor. This is fine for readiness-based I/O, where you try the operation and, if it fails (e.g. it would block), you hand the buffer back to the caller and try again later (after getting a ready event).
With completion-based I/O, such as io_uring, this doesn't work. With completion-based I/O you pass a mutable reference to the buffer to the kernel, which keeps it around until the read(2) system call is complete. This means that if the read can't be completed within the call to Read::poll_read, which is very likely, we still need to keep the mutable reference to the buffer, because the kernel still holds a mutable reference to it. Furthermore, if the Future/type owning the buffer is dropped before the read is complete, we must not deallocate the buffer, otherwise the kernel will write into memory we no longer own.
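To make the hazard concrete, here is a minimal standalone sketch (not hyper or A10 code) using the tokio-rs io-uring crate; the file path and user_data value are arbitrary. The submission entry only carries a raw pointer and a length, so nothing in the type system keeps the buffer alive for the kernel:

```rust
use std::os::unix::io::AsRawFd;

use io_uring::{opcode, types, IoUring};

fn main() -> std::io::Result<()> {
    let file = std::fs::File::open("/etc/hostname")?;
    let mut ring = IoUring::new(8)?;
    let mut buf = vec![0u8; 4096];

    // The kernel only receives a raw pointer and a length; nothing ties the
    // buffer's lifetime to the in-flight read.
    let read_e = opcode::Read::new(
        types::Fd(file.as_raw_fd()),
        buf.as_mut_ptr(),
        buf.len() as u32,
    )
    .build()
    .user_data(0x42);

    unsafe {
        ring.submission().push(&read_e).expect("submission queue is full");
    }

    // If `buf` were dropped here (e.g. because the type owning it was dropped),
    // the kernel would write into freed memory. Waiting for the completion
    // before letting `buf` go out of scope is what keeps this sketch sound.
    ring.submit_and_wait(1)?;
    let cqe = ring.completion().next().expect("completion queue is empty");
    assert!(cqe.result() >= 0, "read failed: {}", cqe.result());
    println!("read {} bytes", cqe.result());
    Ok(())
}
```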
For A10 the only solution to this problem I could come up with is that the Future that represents the read(2) call must have ownership of the buffer. If the Future is dropped before completion we can defer the deallocation of the buffer (by leaking it, or in the case of A10 by deallocating it once the kernel is done with the read(2) call).
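For comparison, tokio-uring (the Tokio effort mentioned above) settled on the same ownership-passing shape: the buffer is moved into the operation and handed back together with the result. A small sketch, roughly following tokio-uring's documented read_at API; the path is arbitrary:

```rust
fn main() -> std::io::Result<()> {
    tokio_uring::start(async {
        let file = tokio_uring::fs::File::open("/etc/hostname").await?;

        // The buffer is passed by value: the in-flight operation owns it, so
        // dropping the future cannot free memory the kernel may still write to.
        let buf = vec![0u8; 4096];
        let (res, buf) = file.read_at(buf, 0).await;
        let n = res?;

        // Ownership of the buffer comes back to us only after completion.
        println!("read {} bytes: {:?}", n, &buf[..n]);
        Ok(())
    })
}
```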
I'm not exactly sure how this would change Hyper's I/O traits though. I've been thinking about an owned version of ReadBuf, which I think would be the easiest solution and would solve the problem described above. However, if we want to take it to the next level and use io_uring's buffer pool (IORING_REGISTER_PBUF_RING), an owned version of ReadBuf will not be sufficient either.
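For what it's worth, a rough sketch of what such an owned buffer type could look like; OwnedReadBuf and its methods are hypothetical, not an existing or proposed hyper type:

```rust
/// Hypothetical owned counterpart to hyper's `ReadBuf`: it owns its storage, so
/// it can be moved into (and later out of) the future driving the io_uring read.
pub struct OwnedReadBuf {
    buf: Vec<u8>,
    filled: usize,
}

impl OwnedReadBuf {
    pub fn with_capacity(capacity: usize) -> Self {
        OwnedReadBuf { buf: vec![0; capacity], filled: 0 }
    }

    /// Unfilled part of the buffer, which would be handed to the kernel to fill.
    pub fn unfilled_mut(&mut self) -> &mut [u8] {
        &mut self.buf[self.filled..]
    }

    /// Record that the kernel wrote `n` more bytes into the unfilled part.
    pub fn advance(&mut self, n: usize) {
        self.filled = (self.filled + n).min(self.buf.len());
    }

    /// Bytes filled so far, ready for hyper to parse.
    pub fn filled(&self) -> &[u8] {
        &self.buf[..self.filled]
    }

    /// Give the storage back, e.g. to return it to a buffer pool.
    pub fn into_inner(self) -> Vec<u8> {
        self.buf
    }
}
```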
Maybe we can use something similar to the hyper::body::Body trait, where the I/O implementation can define its own buffer type. Ownership of the buffer will remain with the I/O type until the buffer is filled and Hyper can use it, which would solve the lifetime problem of ReadBufCursor described above. Furthermore, the buffer type can be defined by the I/O implementation so io_uring's buffer pool can be used, which would solve the second problem.
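A rough sketch of that Body-like shape, with the buffer as an associated type; the trait and method names here are hypothetical, not an actual hyper API:

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Hypothetical completion-friendly read trait: the I/O implementation owns and
/// chooses its buffer type, e.g. a slot handed out by an io_uring provided-buffer
/// ring (IORING_REGISTER_PBUF_RING), and hyper only ever sees filled buffers.
pub trait CompletionRead {
    /// Buffer type defined by the I/O implementation.
    type Buf: AsRef<[u8]>;

    /// Resolves once a filled buffer is ready for hyper to consume; ownership of
    /// the buffer only transfers to the caller at that point.
    fn poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<Self::Buf>>;
}
```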
Long post, but I hope it highlights some of the major blockers of using Hyper with a completion-based I/O implementation.
I'll keep my eye on this~
@Thomasdezeeuw One alternative to the owned buffer could be to replace ReadBufCursor<'_> with some variation of ReadBufCursor<'self> (probably not the actual ReadBufCursor type). Then this ensures that the buffer stays available as long as you can poll the future. I'm not sure if io_uring has a straightforward way to deregister a target buffer when a future is dropped, though.
> this ensures that the buffer stays available as long as you can poll the future.
Except that's not long enough. The future can be dropped at any time, but the read operation might still be ongoing, which means the kernel still has mutable access to the buffer even after the future is dropped.
Either you have to do a synchronous cancel in the future's Drop implementation, meaning having a blocking Drop implementation, or you need another solution. For A10 I chose to not block in the Drop implementation and instead delay the deallocation of the buffer until after the kernel was finished with it. But to achieve this you need ownership of the buffer.
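A simplified, std-only sketch of that deferred-deallocation idea (this is not A10's actual code; OpState stands in for whatever per-operation bookkeeping the runtime keeps): when the handle is dropped before completion, the buffer is parked in state shared with the completion handler instead of being freed.

```rust
use std::sync::{Arc, Mutex};

/// Shared bookkeeping for one in-flight io_uring operation.
struct OpState {
    completed: bool,
    /// Buffer parked here when the owning future is dropped before completion.
    orphaned_buf: Option<Vec<u8>>,
}

/// Future-side handle that owns the buffer while the operation is in flight.
struct ReadOp {
    state: Arc<Mutex<OpState>>,
    buf: Option<Vec<u8>>,
}

impl Drop for ReadOp {
    fn drop(&mut self) {
        if let Some(buf) = self.buf.take() {
            let mut state = self.state.lock().unwrap();
            if !state.completed {
                // The kernel may still write into this buffer: don't free it now,
                // park it so the completion handler frees it later.
                state.orphaned_buf = Some(buf);
            }
            // If the operation already completed, `buf` is dropped here as usual.
        }
    }
}

/// Called by the runtime when the completion queue entry for this operation arrives.
fn on_completion(state: &Arc<Mutex<OpState>>) {
    let mut state = state.lock().unwrap();
    state.completed = true;
    // Dropping the orphaned buffer is now safe: the kernel is done with it.
    state.orphaned_buf = None;
}
```

A real runtime would also submit a cancellation for the in-flight operation on drop; the key point here is only that the buffer's deallocation waits for the completion event.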
> Except that's not long enough. The future can be dropped at any time, but the read operation might still be ongoing, which means the kernel still has mutable access to the buffer even after the future is dropped.
>
> Either you have to do a synchronous cancel in the future's Drop implementation, meaning having a blocking Drop implementation...
And even that's not safe, right? The future might be leaked via mem::forget, and then the Drop implementation will never run.
> And even that's not safe, right? The future might be leaked via mem::forget, and then the Drop implementation will never run.
It's only safe if the buffer is heap-allocated (or at least not stack-allocated), because then the buffer is merely leaked and the kernel still has valid mutable access.
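A tiny illustration of that distinction (just a sketch, not runtime code): a leaked heap allocation stays valid for the rest of the program, while stack memory is reclaimed when its frame returns no matter what.

```rust
/// Leaking a heap buffer: the allocation stays valid for the rest of the program,
/// so a raw pointer previously handed to the kernel keeps pointing at live memory.
fn leak_heap_buffer() -> *mut [u8; 4096] {
    let buf: Box<[u8; 4096]> = Box::new([0u8; 4096]);
    // Gives up ownership without freeing; nothing will ever deallocate this.
    Box::into_raw(buf)
}

// By contrast, a buffer on the stack cannot be "leaked" past the end of its
// frame: once the function returns, that memory is reused, regardless of
// `mem::forget` or a skipped Drop.
```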
I don't know if this belongs to hyper, tokio, or mio, but it is interesting to use the new I/O stack from Linux 5.1+. In the async Java world there is interest in starting to support the new feature. Copy & paste of that issue here.