AnIrishDuck opened this issue 9 years ago
This (as well as https://github.com/akka/akka/issues/18091) highlights that we did not document pre-fetching well enough, I guess. The fact is, every stage in Akka Streams pre-fetches according to its buffer size. You'll find pre-fetch behaviour in all stages, not just `log`. So the question becomes: how should we properly document or otherwise explain it? I don't think dropping the pre-fetch mechanisms is an option, AFAICS.
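For what it's worth, the amount of pre-fetching can be tuned (though not turned off, since the minimum buffer size is 1) via the `inputBuffer` attribute. A minimal sketch, where `myFlow` is a placeholder stage:

```scala
import akka.stream.Attributes
import akka.stream.scaladsl.Flow

val myFlow = Flow[Int].map(identity) // placeholder for any stage

// Shrink the stage's input buffer to a single element; it will still
// pre-fetch that one element, but no more.
val smallBuffer = myFlow
  .log("MIDDLEMAN")
  .addAttributes(Attributes.inputBuffer(initial = 1, max = 1))
```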
OK. I hope I didn't miss some documentation somewhere, but I obviously wasn't aware that pre-fetching is a thing.
It's probably worth noting that I'm running into these issues when writing tests, not in production. I'm writing custom stages and would like to test pretty specific scenarios to rule out certain concurrency issues. Having some way to turn pre-fetching off for certain stages would solve my issue. At the moment I'm creating bespoke `Publisher` and `Subscriber` objects, which feels wrong.
Or maybe the answer is more precise methods of control over backpressure / completion / cancellation in a testing environment? Having special testing methods / stages where I can gate elements might solve my problem.
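For reference, akka-stream-testkit's probes provide exactly this kind of manual control without hand-rolling `Publisher` / `Subscriber` implementations. A sketch, where `myStage` is a placeholder for the custom stage under test:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Keep}
import akka.stream.testkit.scaladsl.{TestSink, TestSource}

implicit val system = ActorSystem("probe-test")
implicit val mat = ActorMaterializer() // needed on pre-2.6 Akka

val myStage = Flow[Int].map(identity) // placeholder for the stage under test

// Materialize probes on both ends so the test drives demand and data by hand.
val (pub, sub) = TestSource.probe[Int]
  .via(myStage)
  .toMat(TestSink.probe[Int])(Keep.both)
  .run()

sub.request(1)      // downstream signals demand explicitly
pub.sendNext(42)    // upstream emits exactly one element
sub.expectNext(42)
pub.sendComplete()
sub.expectComplete()
```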
Depending on request counts is brittle and should be avoided: the signaled demand is just a means, not an end, and ideally the Subscriber will auto-tune the amount of buffering it performs in order to reach performance goals (like maximizing throughput or minimizing latency).
The following fails:

Unless you remove `.log("MIDDLEMAN")`. It appears that `.log()` will preemptively request elements. This can complicate debugging complex scenarios involving precise `request()` and `cancel()` sequences, as it interrupts the true flow of data.