It would be nice to understand how concurrency is handled, similar to the "Concurrency" section in the Queue storage documentation.
I see that max concurrency is configurable in host.json, under serviceBus.maxConcurrentCalls, but it took me a little digging to find it.
Moreover, I'm not sure how the concurrency level is determined, or what its relationship to scaling out the function app is.
Example: If 200,000 messages arrive in the Service Bus queue, will the function scale out, and does the maxConcurrentCalls limit of 16 apply per instance (i.e., get multiplied by the instance count)? What determines how many messages are pulled at a given time, and when scaling occurs?
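For context, here is a minimal host.json sketch showing where the setting lives. This assumes the Functions v2+ layout, where Service Bus options sit under extensions.serviceBus.messageHandlerOptions; 16 is the documented default, not a recommendation.

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 16
      }
    }
  }
}
```

As I understand it, this limit applies per host instance, so scaling out would multiply effective concurrency, but that is exactly the behavior I'd like the docs to confirm.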
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 1d50399c-0873-e903-35ca-1845c84b6c6c
Version Independent ID: 1a3150b4-5eb9-0ac5-ee2b-d1bb6b6fd63e