Exclude WebSocket Requests from "slow response time" rules of SmartDetection #1379

Open · ndreisg opened this issue 6 years ago

ndreisg commented 6 years ago

Hi there,

my WebApp handles WebSocket requests as well as normal HTTP requests. Recently, I received some "slow response time" notification emails from Application Insights for this WebApp. The response time of WebSocket requests is naturally very high (it spans the time between the open and close handshakes).

Is it possible to exclude WebSocket requests from SmartDetection, so that I only get notifications if the response times of the normal HTTP requests are slow?

Best regards, Alex

ndreisg commented 6 years ago

Today I realized that this problem also causes the Web App to recycle more frequently. In Application Events I had three events with ID 2299 today ("Worker Process requested recycle due to 'Percent Slow Requests' limit"). Is it possible to deactivate this 'Percent Slow Requests' limit?
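As an aside on the recycles: the 'Percent Slow Requests' rule appears to belong to App Service proactive auto-heal rather than to Application Insights. If that is what is firing here (an assumption; event 2299 alone is not conclusive), it can be switched off with an app setting:

```
WEBSITE_PROACTIVE_AUTOHEAL_ENABLED=false
```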

SergeyKanzhelev commented 5 years ago

Are you using the .NET SDK? You can use telemetry processors to exclude WebSocket requests: https://docs.microsoft.com/azure/azure-monitor/app/api-filtering-sampling
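A minimal sketch of that approach, assuming WebSocket endpoints can be recognized by their URL path (the "/ws" prefix below is a hypothetical placeholder for whatever your app actually uses):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops request telemetry for WebSocket endpoints so their long
// durations never feed response-time metrics or Smart Detection.
public class WebSocketRequestFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public WebSocketRequestFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // "/ws" is a hypothetical path prefix; match on whatever
        // identifies WebSocket requests in your app.
        if (item is RequestTelemetry request &&
            request.Url?.AbsolutePath.StartsWith("/ws") == true)
        {
            return; // swallow the item instead of passing it along
        }

        _next.Process(item);
    }
}
```

In ASP.NET Core the processor is registered with services.AddApplicationInsightsTelemetryProcessor&lt;WebSocketRequestFilter&gt;(); in classic ASP.NET it is added to ApplicationInsights.config, as described in the linked article.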

jeffputz commented 4 years ago

This does seem like something that should be configurable out of the box.

rickdgray commented 3 years ago

Same issue here. This should be accounted for: App Insights treats it as a problem, but the behavior is expected.

github-actions[bot] commented 2 years ago

This issue is stale because it has been open 300 days with no activity. Remove stale label or comment or this will be closed in 7 days.

rickdgray commented 2 years ago

Bumping because this should be fixed.

github-actions[bot] commented 2 years ago

This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.

jeffputz commented 2 years ago

Boing

igristov commented 1 year ago

A four-year-old problem and no solution yet? Basically, "metricNamespace":"Microsoft.Web/sites","metricName":"HttpResponseTime" should not include WebSocket connections, because they can last for hours and are not what you expect to monitor or trigger alerts on.

cijothomas commented 1 year ago

"metricNamespace":"Microsoft.Web/sites","metricName":"HttpResponseTime"

^ This is not something produced by the Application Insights SDK.

Zurina commented 1 year ago

Any update on this issue? I'm also running an App Service that handles WebSockets with long-running connections, and we see spikes of up to an hour in our average response time metric.

cijothomas commented 1 year ago

https://github.com/microsoft/ApplicationInsights-dotnet/pull/2372/files fixed the SDK to exclude long-running SignalR connections.

SmartDetection is not part of this SDK. The "metricNamespace":"Microsoft.Web/sites","metricName":"HttpResponseTime" metric is also not part of this SDK, so unfortunately we can't help with it here.
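For anyone who would rather drop those requests entirely than have them marked, the telemetry-processor pattern sketched earlier in this thread can key off the fix instead of a URL. This is a sketch, assuming the fixed SDK stamps long-running requests with an 'http.long_running' property, as a later comment here observes:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops any request telemetry the SDK has flagged as long running.
public class LongRunningRequestFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public LongRunningRequestFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // "http.long_running" is the property name reported later in
        // this thread; verify it against the PR before relying on it.
        if (item is RequestTelemetry request &&
            request.Properties.ContainsKey("http.long_running"))
        {
            return;
        }

        _next.Process(item);
    }
}
```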

ndreisg commented 1 year ago

You could help by forwarding the issue to the right team.

Zurina commented 1 year ago

Okay, can you point me in the right direction with a link?

cijothomas commented 1 year ago

I do not know whether the teams outside the SDK have a GitHub repo for reporting issues. If you still see an issue with Application Insights SDK-produced metrics after https://github.com/microsoft/ApplicationInsights-dotnet/pull/2372/files, this is the right forum.

If it is about SmartDetection, my only suggestion is to open an Azure support ticket. As for who produces the "metricNamespace":"Microsoft.Web/sites","metricName":"HttpResponseTime" metric, my best guess is that it is emitted automatically by Azure Web Apps, so you would need to open a support ticket for that as well.

jeffputz commented 1 year ago

It's pretty weird when someone inside the company asks someone outside the company to influence a product decision, especially by going through support, with which I suspect we've all had, at best, mediocre interactions.

rafaelgrilli92 commented 1 year ago

The issue is still happening. It took me a while to understand that it wasn't a real problem with our application, and I can't believe it hasn't been looked at after all these years...

cijothomas commented 1 year ago

> The issue is still happening. It took me a while to understand that it wasn't a real problem with our application, and I can't believe it hasn't been looked at after all these years...

https://github.com/microsoft/ApplicationInsights-dotnet/pull/2372/files has fixed this.

rafaelgrilli92 commented 1 year ago

> The issue is still happening. It took me a while to understand that it wasn't a real problem with our application, and I can't believe it hasn't been looked at after all these years...
>
> https://github.com/microsoft/ApplicationInsights-dotnet/pull/2372/files has fixed this.

Was this fix released? I have the latest App Insights SDK for .NET (2.21.0) and I'm still getting very high response times in Application Insights, while response times in the rest of the application seem fine. Any suggestions?

Edit: Actually, sorry, I updated the SignalR SDK, not the App Insights one. I will try that.

akeijzer11 commented 3 months ago

Is this issue still open? @rafaelgrilli92, can you tell us whether the issue is fixed in the updated App Insights package?

We are on the latest .NET (8.0.6), and we too are seeing these response times; our overall average response times are not usable anymore. We use Azure.Monitor.OpenTelemetry.AspNetCore (1.2.0) but are willing to switch back if the issue is fixed in Microsoft.ApplicationInsights.AspNetCore (2.22.0).

Can we assume that OpenTelemetry doesn't include this fix, even though adopting OpenTelemetry is "recommended"?

The 'http.long_running' property is included in the request telemetry.

Related feature requests: https://feedback.azure.com/d365community/idea/0bde8e7e-a625-ed11-9db1-000d3a4d9566 and https://feedback.azure.com/d365community/idea/70486e04-c299-ed11-a81b-6045bd79fc6e
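Until the distro addresses this, one possible workaround is to keep WebSocket requests out of the ASP.NET Core trace instrumentation altogether. This is a sketch, not an official fix; it assumes the Azure Monitor distro's instrumentation can be configured through AspNetCoreTraceInstrumentationOptions and that WebSocket requests are identifiable from the incoming HttpContext:

```csharp
using Azure.Monitor.OpenTelemetry.AspNetCore;
using OpenTelemetry.Instrumentation.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

// Wire up the Azure Monitor OpenTelemetry distro as usual.
builder.Services.AddOpenTelemetry().UseAzureMonitor();

// Exclude WebSocket upgrade requests from request tracing so their
// multi-hour durations never reach the response-time metrics.
builder.Services.Configure<AspNetCoreTraceInstrumentationOptions>(options =>
{
    options.Filter = httpContext => !httpContext.WebSockets.IsWebSocketRequest;
});

var app = builder.Build();
app.Run();
```

Requests for which the filter returns false are never recorded as request telemetry, so they cannot skew the averages, at the cost of losing all visibility into the WebSocket traffic.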

rafaelgrilli92 commented 3 months ago

@akeijzer11 Not really. I ended up setting up Azure SignalR Service instead of using our backend server to handle the connections, so now the weird response times are gone.

cijothomas commented 3 months ago

> We use Azure.Monitor.OpenTelemetry.AspNetCore (1.2.0)

I don't think OTel has fixed this; it would be good to report it in the AzureMonitor OpenTelemetry repo: https://github.com/Azure/azure-sdk-for-net/issues/new/choose

FYI @rajkumar-rangaraj, who owns the AzureMonitor Distro, to share plans for addressing this in the OTel distros.

This is fixed in Application Insights.