elastic-rs / elastic-rotor

A fast, experimental async Elasticsearch REST client in Rust

Verify pipelining is happening #1

Open KodrAus opened 7 years ago

KodrAus commented 7 years ago

So rotor should be able to pipeline requests over a single connection. I'm assuming this happens by just spinning off a new request on the same connection, but I'm not really sure.

This needs to be measured, maybe using a tool like clumsy on Windows (there's probably a *nix alternative, but I'm not aware of one) to slow the requests right down, and timing when they start vs. when they complete.
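One way to check for overlap without a traffic shaper is to timestamp when each request starts and completes: if requests are pipelined over one connection, their in-flight windows should overlap and the total wall time should be close to a single request's latency. A minimal std-only sketch of that measurement, with threads and a sleep standing in for the client and the slow network (no real Elasticsearch involved):

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let t0 = Instant::now();

    // Simulate three in-flight requests, each taking ~100ms of "network" time.
    let handles: Vec<_> = (0..3)
        .map(|i| {
            thread::spawn(move || {
                let start = Instant::now();
                thread::sleep(Duration::from_millis(100)); // stand-in for a slow request
                (i, start, Instant::now())
            })
        })
        .collect();

    for h in handles {
        let (i, start, end) = h.join().unwrap();
        println!(
            "request {}: started at {:?}, finished at {:?}",
            i,
            start.duration_since(t0),
            end.duration_since(t0)
        );
    }

    // If the requests overlap (pipelined/concurrent), total time is ~100ms;
    // strictly serial requests would take ~300ms.
    let total = t0.elapsed();
    println!("total: {:?}", total);
    assert!(total < Duration::from_millis(250), "requests ran serially");
}
```

The same start/end bookkeeping applied to real client requests would show whether they run serially or overlapped.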

KodrAus commented 7 years ago

So it doesn't seem to be happening, and I'm guessing this is for a couple of reasons:

  1. futures::collect seems to block on each future. I suspect this is because Client doesn't implement Clone, so it needs to be moved and passed to one closure at a time. I'm not sure whether it's actually that smart about it, though...
  2. rotor_http isn't waking up the state machine while there's an active request in progress, either because it doesn't actually support pipelining yet or because that bit of wakeup just isn't implemented.

I'll check these both out.

KodrAus commented 7 years ago

Ok, so 1 is a non-issue. futures::collect does poll one future at a time, but that isn't a problem if the futures are already executing on another thread somewhere.

So for concurrency: spin up all the futures first (which in this case means putting a message on a queue), then poll the futures that are already in flight.
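That spin-up-first-then-poll point can be illustrated with plain channels standing in for the request queue (the names here are hypothetical, not the actual client API). All the work is put in flight before we wait on anything, so collecting the results in order costs roughly one request's worth of time, not three:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let t0 = Instant::now();

    // "Spin up" all three requests first: each one starts executing on its
    // own worker thread before we wait on any result.
    let receivers: Vec<_> = (0..3)
        .map(|i| {
            let (tx, rx) = mpsc::channel();
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(100)); // simulated request latency
                tx.send(i).unwrap();
            });
            rx
        })
        .collect();

    // Now wait on them one at a time, the way futures::collect polls.
    // Because the work is already running elsewhere, waiting in order
    // adds no extra latency.
    let results: Vec<i32> = receivers.iter().map(|rx| rx.recv().unwrap()).collect();
    println!("{:?} in {:?}", results, t0.elapsed());
    assert_eq!(results, vec![0, 1, 2]);
    assert!(t0.elapsed() < Duration::from_millis(250));
}
```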

2 is correct: there's no actual pipelining happening, because the state machine isn't being woken up while a request is in progress. I might have a look at fixing this, or just prototype a client with tokio and see what edges that produces.

KodrAus commented 7 years ago

Here's a neat idea: use futures::stream::channel to send requests to our connection pool.

We can then either handle them directly through the stream, or stick them on a queue that a bunch of connections can fight over. The queue would need to participate in back pressure.
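A std-only sketch of the queue half of that idea. The bounded `sync_channel` is what gives the queue back pressure: sends block once the buffer is full, so a fast producer can't outrun the connections. The "connection" workers here are hypothetical stand-ins for the real pool:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    // Bounded queue: senders block once 2 requests are buffered.
    // That blocking is the queue's participation in back pressure.
    let (tx, rx) = mpsc::sync_channel::<String>(2);
    let rx = Arc::new(Mutex::new(rx));

    // A couple of "connections" fighting over the shared queue.
    let workers: Vec<_> = (0..2)
        .map(|id| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Lock only long enough to take one request off the queue.
                let req = { rx.lock().unwrap().recv() };
                match req {
                    Ok(req) => {
                        thread::sleep(Duration::from_millis(10)); // simulated request
                        println!("connection {} handled {}", id, req);
                    }
                    Err(_) => break, // queue closed: all senders dropped
                }
            })
        })
        .collect();

    for i in 0..6 {
        // Blocks whenever the queue is full, pushing back on the producer.
        tx.send(format!("request {}", i)).unwrap();
    }
    drop(tx); // close the queue so the workers shut down

    for w in workers {
        w.join().unwrap();
    }
}
```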