deadpassive opened this issue 3 years ago (status: Open)
I'm observing the same issue as you. If you're using a Table for state, there is a similar discussion in this Slack thread:

"we use the threaded producer for the changelogs that gives better performance"

which I believe refers to this. You might want to give that a shot. I cannot help much further, as I'm still trying to set up my first faust pipeline myself.
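For reference, a minimal sketch of what enabling the threaded producer could look like. The `producer_threaded` setting name is an assumption based on the faust-streaming changelog; check the settings reference for your installed version:

```python
import faust

# Sketch only: route changelog/producer writes through a separate
# producer thread. `producer_threaded` is assumed from faust-streaming
# docs and may not exist in older faust versions.
app = faust.App(
    "my-app",
    broker="kafka://localhost:9092",
    producer_threaded=True,
)
```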
Steps to reproduce
We have several streaming agents: some update tables, some send to other topics. In steady state this works fine; however, when ingesting peak loads (e.g. after resetting consumer offsets to 2 and re-ingesting large amounts of data) we get a bunch of warnings, although functionality seems unaffected.
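A minimal sketch of the kind of topology described above, with one agent updating a Table and another forwarding to a downstream topic. All names here are illustrative, not taken from the actual application:

```python
import faust

# Hypothetical reproduction: several agents on one app, some updating
# a Table, some producing to other topics.
app = faust.App("example", broker="kafka://localhost:9092")

source = app.topic("source-events")
downstream = app.topic("downstream-events")
counts = app.Table("counts", default=int)

@app.agent(source)
async def update_table(stream):
    # Each table update also emits a changelog message, so bursty
    # input can fill the producer buffer quickly.
    async for event in stream:
        counts[event["key"]] += 1

@app.agent(source)
async def forward(stream):
    async for event in stream:
        await downstream.send(value=event)
```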
Expected behavior
We don't get any warnings.
Actual behavior
We get a bunch of warnings about the producer buffer size.
Attempted fixes
We've managed to resolve this by:
- Increasing max_messages
- Adding await asyncio.sleep(0.1)
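To illustrate why the sleep helps: an agent loop that never awaits can fill a send buffer faster than the background flush loop can drain it, because the flusher only runs when the agent yields control to the event loop. This toy asyncio sketch (no faust APIs; all names are illustrative) compares the buffer's high-water mark with and without yielding:

```python
import asyncio

async def flusher(buffer):
    # Background task draining the buffer, standing in for the
    # producer's flush loop.
    while True:
        buffer.clear()
        await asyncio.sleep(0)

async def agent(buffer, n_messages, yield_each):
    # Append n_messages items, tracking the buffer's high-water mark.
    high_water = 0
    for i in range(n_messages):
        buffer.append(i)  # stand-in for topic.send(...)
        high_water = max(high_water, len(buffer))
        if yield_each:
            await asyncio.sleep(0)  # give the flusher a chance to run
    return high_water

async def run_scenario(yield_each, n=1000):
    buffer = []
    flush_task = asyncio.ensure_future(flusher(buffer))
    high_water = await agent(buffer, n, yield_each)
    flush_task.cancel()
    return high_water

# Without yielding, the buffer grows unchecked; with a yield per
# message, the flusher keeps it near empty.
without_yield = asyncio.run(run_scenario(False))
with_yield = asyncio.run(run_scenario(True))
print(without_yield, with_yield)
```

The same backpressure effect is what `await asyncio.sleep(0.1)` introduces in the agent: it periodically hands control back to the event loop so pending produces can be flushed before more are queued.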
Full traceback
Versions