cdavidsoAtSbux opened this issue 1 year ago
@cdavidsoAtSbux can you clarify whether the issue here is initial message processing (application activation picking up a message as it is added to Service Bus) or throughput?
To better understand what is happening, it would be helpful to have your application name so we can take a closer look at the logs. Alternatively, if you have the ability to engage support, that might be the best/quickest path to understand what is happening here.
Could you please share the version of the worker packages you're using?
This issue has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment.
Hello, and thanks for jumping on the thread. I actually opened this at the request of the MS support tech I was working with. We've since identified that the problem centered around a different timer function that was polling the Service Bus topic subscriptions every 5 seconds and causing a ServerBusyException, which was in turn throttling access to the topic subscriptions.
Our solution was to turn the timer function down from every 5 seconds to every 15 minutes, which causes a lot fewer issues. Although it does still throw the ServerBusyExceptions, it happens far less than it was and isn't causing our flow to stall when it shouldn't.
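For reference, the change itself was just the NCRONTAB schedule on the timer trigger. A minimal sketch of what that looks like in the isolated worker model (the class/function name and the polling body are placeholders, not our actual code):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class SubscriptionPoller
{
    private readonly ILogger<SubscriptionPoller> _logger;

    public SubscriptionPoller(ILogger<SubscriptionPoller> logger) => _logger = logger;

    // Was "*/5 * * * * *" (every 5 seconds); now every 15 minutes so the
    // polling no longer hammers the topic subscriptions.
    [Function("SubscriptionPoller")]
    public void Run([TimerTrigger("0 */15 * * * *")] TimerInfo timerInfo)
    {
        _logger.LogInformation("Polling Service Bus topic subscriptions");
        // ...polling logic elided...
    }
}
```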
If you happen to know whether we should be seeing these exceptions at all, and if so, what an acceptable rate is for them to fire so I can create an alert around it, that would be super awesome. :)
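For what it's worth, in the Azure.Messaging.ServiceBus client a ServerBusy error surfaces as a ServiceBusException with Reason == ServiceBusFailureReason.ServiceBusy; the client treats it as transient and retries it per its ServiceBusRetryOptions. Here's a rough sketch of how we could log these distinctly on our side so there's something concrete to alert on (the names and the peek-based polling are placeholders, not our actual code):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.Logging;

public class TopicPoller
{
    private readonly ServiceBusClient _client;
    private readonly ILogger<TopicPoller> _logger;

    public TopicPoller(ServiceBusClient client, ILogger<TopicPoller> logger)
    {
        _client = client;
        _logger = logger;
    }

    public async Task PeekSubscriptionAsync(string topicName, string subscriptionName)
    {
        ServiceBusReceiver receiver = _client.CreateReceiver(topicName, subscriptionName);
        try
        {
            // Peek rather than receive so the poll doesn't compete with the trigger for messages.
            var messages = await receiver.PeekMessagesAsync(maxMessages: 10);
            _logger.LogInformation("Peeked {Count} messages on {Topic}/{Sub}",
                messages.Count, topicName, subscriptionName);
        }
        catch (ServiceBusException ex) when (ex.Reason == ServiceBusFailureReason.ServiceBusy)
        {
            // ServerBusy is transient and already retried by the client; logging a
            // distinct warning here gives a clean signal to build an alert on.
            _logger.LogWarning(ex, "ServerBusy while polling {Topic}/{Sub}", topicName, subscriptionName);
        }
        finally
        {
            await receiver.DisposeAsync();
        }
    }
}
```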
@JoshLove-msft any chance you have more context on ServerBusyExceptions with Service Bus, and can help answer this question:

> If you happen to know whether we should be seeing these exceptions at all, and if so, what an acceptable rate is for them to fire so I can create an alert around it, that would be super awesome. :)
**Summary of .NET-related changes we've made**

- We took code originally written for .NET Core 2.2 and migrated it to .NET 6.
- The new functions are Service Bus topic trigger functions deployed to run as dotnet-isolated on a shared Premium App Service plan. The plan is shared across all of our products' functions, which amounts to 19 separate Azure Function resources.
- We had to convert from using HttpRequestMessage and HttpResponseMessage to using HttpRequestData and HttpResponseData as the request and return types in the functions.
- We moved from registering DI types in the constructor of the function class to registering them in a top-level Program.cs, and they are now injected into function constructors (a sketch of both changes follows this list).
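For reference, a rough sketch of the shape of that change in the isolated worker model (the service and function names below are made up, not our actual code):

```csharp
// Program.cs -- DI registrations that used to live in the function class constructor
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddSingleton<IOrderService, OrderService>();
    })
    .Build();

host.Run();
```

And the converted HTTP functions now take HttpRequestData and return HttpResponseData, with dependencies arriving through the constructor:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class OrdersFunction
{
    private readonly IOrderService _orders;

    public OrdersFunction(IOrderService orders) => _orders = orders;

    [Function("GetOrders")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        // HttpRequestData/HttpResponseData replace HttpRequestMessage/HttpResponseMessage
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("ok");
        return response;
    }
}
```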
**Issue**

- While the new functions compile and run locally just fine, we are finding that when deployed into Azure they are slow to fire as new messages are added to the Service Bus topic.
**Attempts to Mitigate**

- We felt it was possible that our ARM template was too far out of date, so we created one from scratch using the guidance at https://learn.microsoft.com/en-us/azure/azure-functions/functions-infrastructure-as-code?tabs=json. This had no effect; the newly deployed resource looked identical to the old one and behaved in exactly the same way.
- We went to https://resources.azure.com/ to change the netFrameworkVersion setting from v4.0 to v6.0. The only effect this had was that the Function Runtime setting went from "custom (~4)" to just "(~4)"; the function behaved exactly the same, though. This was a suggestion obtained through the following link: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/issues-you-may-meet-when-upgrading-azure-function-app-to-v4/ba-p/3288983
- We adjusted the idle session timeout from the default 30 seconds to 10 seconds, which did not seem to have an effect.
- We adjusted the instance count on the App Service plan from 1 to 8. At this point the functions started behaving as they did when they were .NET Core 2.2 running on a single instance.