Some substreams are huge (a large number of stores); others are tiny (no stores at all). This makes it hard to pick a good value for the maximum number of concurrent substreams requests to run on a single container.

How about a new feature where you can put a multiplier on the complexity of a substream to better balance the load? We have a substream with 7 stores in a single stage. It gobbles a lot of RAM and crashes the tier2 nodes. If we could say every store adds an extra 0.5, this substream would cost 1 + (7 × 0.5) = 4.5 instead of just 1 when evaluating capacity against `substreams-tier2-max-concurrent-requests`. The "0.5" would be configurable.
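A minimal sketch of the proposed weighting, in Go. Everything here is hypothetical (the `storeWeight` constant and `requestCost` function are illustrative names, not existing firehose-core code); it only shows the arithmetic a scheduler would use when counting a request against the concurrency limit:

```go
package main

import "fmt"

// storeWeight is the proposed configurable per-store multiplier
// (the "0.5" above); name and default are hypothetical.
const storeWeight = 0.5

// requestCost returns the weighted capacity cost of a request:
// a base cost of 1 plus storeWeight for each store module.
func requestCost(storeCount int) float64 {
	return 1 + float64(storeCount)*storeWeight
}

func main() {
	fmt.Println(requestCost(7)) // 7-store substream counts as 4.5
	fmt.Println(requestCost(0)) // store-less substream still counts as 1
}
```

With this, `substreams-tier2-max-concurrent-requests` would bound the sum of weighted costs rather than the raw request count, so one heavy 7-store substream consumes as much budget as four or five trivial ones.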