We currently have no proper backpressure mechanism: on send we simply check whether the ZeroMQ buffers are full (by catching an exception) and, if they are, wait until the message is pushed through.
We need to (this can be divided into 2 subtasks):
1. Implement a simple mechanism indicating the state of backpressure in a given operator or the whole pipeline (e.g. rx/tx rate, buffer utilization, etc.)
2. Implement an advanced backpressure mechanism with feedback: pass a reverse channel between operators so the target can notify the source when its buffer is reaching its limit (similar to Flink's Credit-based Flow Control mechanism); this can also improve checkpointing speed
Either or both of these may depend on implementing explicit per-task message buffers (this needs more research).
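For the first subtask, a minimal sketch of the kind of indicator that could be exported per operator (all names here are hypothetical, not an existing API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class BackpressureStats:
    """Per-operator counters from which rx/tx rates and buffer
    utilization can be derived and exposed to a monitoring endpoint."""
    capacity: int                  # size of the operator's (future) explicit buffer
    rx_count: int = 0
    tx_count: int = 0
    buffered: int = 0
    started_at: float = field(default_factory=time.monotonic)

    def on_receive(self) -> None:
        self.rx_count += 1
        self.buffered += 1

    def on_send(self) -> None:
        self.tx_count += 1
        self.buffered -= 1

    def utilization(self) -> float:
        # Fraction of the buffer in use; near 1.0 signals backpressure.
        return self.buffered / self.capacity

    def rates(self) -> "tuple[float, float]":
        # (rx msgs/s, tx msgs/s) since the operator started.
        elapsed = (time.monotonic() - self.started_at) or 1e-9
        return self.rx_count / elapsed, self.tx_count / elapsed
```

A sustained rx rate above the tx rate, or utilization approaching 1.0, would indicate the operator is a bottleneck.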
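For the second subtask, the feedback idea can be sketched as a credit protocol, modeled loosely on Flink's credit-based flow control (the classes and channel layout are an illustrative assumption, not a design decision): the receiver grants the sender a number of credits equal to its free buffer slots, the sender spends one credit per message and blocks when it has none, and the receiver returns a credit over the reverse channel each time it frees a slot.

```python
import queue

class CreditSender:
    """Sends only while it holds credit; otherwise blocks on the
    reverse channel waiting for the receiver to return credits."""
    def __init__(self, data_channel, credit_channel, initial_credits: int):
        self.data = data_channel
        self.credits = credit_channel
        self.available = initial_credits

    def send(self, msg) -> None:
        while self.available == 0:
            self.credits.get()      # wait for a returned credit
            self.available += 1
        self.available -= 1
        self.data.put(msg)

class CreditReceiver:
    """Returns one credit over the reverse channel per freed slot."""
    def __init__(self, data_channel, credit_channel):
        self.data = data_channel
        self.credits = credit_channel

    def recv(self):
        msg = self.data.get()
        self.credits.put(1)         # buffer slot freed: return a credit
        return msg
```

Because the sender stops before the target buffer overflows, in-flight data between checkpoint barriers stays bounded, which is the mechanism behind the checkpointing-speed improvement mentioned above.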
See https://flink.apache.org/2019/06/05/a-deep-dive-into-flinks-network-stack/ for a description of the Credit-based Flow Control mechanism.