I experience the same issue on .NET 7.0, and I also have a global exception handler middleware, but it is not helping me :P
If you want, I can provide the simple API source code (approx. 150-200 lines) that causes this specific error under load (stress tests) for inspection.
For us, on .NET 6 the issue is triggered by updating a single package; no other package caused issues. We updated from
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.29.0" />
to
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.35.2" /> (tried in July 2023)
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.36.0" /> (tried in December 2023)
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.39.0" /> (tried in April 2024)
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.31.2" /> (tried on 29-30 April 2024)
So we tried upgrading to .NET 8 (without updating Microsoft.Azure.Cosmos) and it caused the same issue, so it wasn't related only to one package. The difference is that it logs a 499 instead of a 400. We then tried updating the other Azure packages, but that didn't solve the issue.
We are going to test updating System.Text.Json to the latest version as well (we skipped that because there are some breaking changes between 6 and 8).
tl;dr: On .NET 6, certain packages (like the Cosmos DB SDK) cause spikes of this exception in Kestrel. On .NET 8 the package doesn't matter; upgrading to .NET 8 with the mandatory packages is enough to cause the issue.
Also, it only shows up under stress tests or in production. It's very hard to reproduce.
I'm seeing this now on .NET 8 isolated after upgrading from .NET 6 / in-process. Does anyone have a filter for these exceptions?
Experiencing the same issue with my .NET 6.0, .NET 7.0, and .NET 8.0 apps. Surprisingly, this only happens when the apps are under load. Any resolution or workaround would be super helpful. Thank you.
How can I log that exception if it is only available when Kestrel logging is set to Debug?
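In case it helps: one way to surface those Kestrel-level details is to raise the log level for the Kestrel logging category. A minimal sketch, assuming a standard minimal-API Program.cs (the endpoint is just a placeholder):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Turn Kestrel's own logging up to Debug so the connection/abort details
// behind BadHttpRequestException are actually emitted.
builder.Logging.AddFilter("Microsoft.AspNetCore.Server.Kestrel", LogLevel.Debug);

var app = builder.Build();
app.MapGet("/", () => "ok"); // placeholder endpoint
app.Run();
```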
> If you want, I can provide the simple API source code (approx. 150-200 lines) that causes this specific error under load (stress tests) for inspection.
@taylaninan can you provide it, please? I'd like to try to build a workaround for this, and I need something to test on. I have an idea to patch the method that causes this problem via the Harmony library.
I haven't had that problem for more than two weeks now, even under high load. What I did was subscribe to AppDomain.CurrentDomain.UnhandledException. I also moved my API from Ubuntu to Debian, and switched from systemctl to a Dockerfile to launch my API. I am not sure which of those was the solution, but the error is gone, or at least the error is logged and the API no longer crashes.
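For reference, a minimal sketch of that UnhandledException subscription, assuming a minimal-API Program.cs (note the event is observational only: it logs the failure but cannot keep the process alive):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Log anything that escapes the normal ASP.NET Core pipeline.
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = e.ExceptionObject as Exception;
    app.Logger.LogCritical(ex, "Unhandled exception (terminating: {IsTerminating})", e.IsTerminating);
};

app.MapGet("/", () => "ok"); // placeholder endpoint
app.Run();
```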
I am able to replicate the error by having Postman send 100 requests with a 5-minute ramp-up. I got approximately 58% failures with the same error message.
Usually that kind of error means the request was aborted on the client side.
@supermihi, sorry for missing this message; you can do what @deleteLater proposed. For our project, we use a custom exception handler middleware to ignore the BadHttpRequestException:
```csharp
using Microsoft.AspNetCore.Http.Extensions; // for GetDisplayUrl()

public class CustomExceptionHandlerMiddleware
{
    private readonly ILogger<CustomExceptionHandlerMiddleware> _logger;
    private readonly RequestDelegate _next;

    public CustomExceptionHandlerMiddleware(ILogger<CustomExceptionHandlerMiddleware> logger, RequestDelegate next)
    {
        _logger = logger;
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (BadHttpRequestException ex)
        {
            // Client aborted or sent a malformed request: log as a warning, not an error.
            _logger.LogWarning(ex, "BadHttpRequestException");
        }
        catch (Exception exception)
        {
            _logger.LogError(exception, $"Exception when invoking {context.Request.Method}:{context.Request.GetDisplayUrl()}");
        }
    }
}
```
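For completeness, a minimal sketch of wiring that middleware into the pipeline, assuming a minimal-API Program.cs (register it early so it wraps everything downstream):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Register the custom handler first so it catches exceptions from the rest of the pipeline.
app.UseMiddleware<CustomExceptionHandlerMiddleware>();

app.MapGet("/", () => "ok"); // placeholder endpoint
app.Run();
```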
Hello. Why is it that, even though I define a middleware like this, the exception is not caught there, and a BadHttpRequestException still occurs after the call to _next?
This is still happening. We are using .NET 8 and are running on Kubernetes on Azure. As soon as we stress the pod a little with multiple concurrent connections, the exception is thrown.
@AnisTigrini are you using app insights profiler?
services.AddServiceProfiler
After I disabled this, it fixed this issue for me. This is the info that got me there: https://stackoverflow.com/questions/77855606/should-we-enable-azure-application-insights-profiler-in-production
Hey there @michaelmarcuccio, thanks for the quick reply. We actually do not have the App Insights profiler enabled on Azure. So it makes me think it might be related to the number of concurrent connections instead.
By the way, for anyone who experienced the same problem, here is what I found. We are running a pod with .NET 8 using the official Microsoft image mcr.microsoft.com/dotnet/sdk:8.0.
The pod is a .NET REST API that makes a lot of HTTP calls to other services. It's kind of a proxy, to be honest.
I tried calling API endpoints that do not make any HTTP calls, and the server returned a response immediately, so I knew the problem occurred when the server was making HTTP calls to other services.
I took a closer look by installing netstat in the pod and realized the problem was socket exhaustion. In summary, we were using a deprecated API (WebClient) that .NET recommends avoiding.
The solution for us was to migrate and replace all those calls with the recommended API, HttpClient. Furthermore, you should avoid creating a new instance per request, as that can also cause socket exhaustion; it is meant to be used as a singleton or via IHttpClientFactory with DI (see the sketch below).
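To illustrate the pattern, a minimal sketch of the IHttpClientFactory approach; the ProxyService class and the downstream URL are placeholders, not the commenter's actual code:

```csharp
var builder = WebApplication.CreateBuilder(args);

// A typed client: the factory pools and recycles the underlying handlers,
// which avoids the socket exhaustion caused by creating a new client per request.
builder.Services.AddHttpClient<ProxyService>(client =>
{
    client.BaseAddress = new Uri("https://downstream.example.com/"); // placeholder
    client.Timeout = TimeSpan.FromSeconds(30);
});

var app = builder.Build();
app.MapGet("/proxy", (ProxyService proxy) => proxy.ForwardAsync());
app.Run();

public class ProxyService
{
    private readonly HttpClient _client;
    public ProxyService(HttpClient client) => _client = client;

    public Task<string> ForwardAsync() => _client.GetStringAsync("status"); // relative to BaseAddress
}
```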
I hope this helps some of you that are dealing with the issue.
We are intermittently facing BadHttpRequestException: Unexpected end of request content.
We are running on .NET Core 3.1.5; this exception seemed to appear only after we moved over to .NET Core 3.0.
There have been similar issues opened in the past: https://github.com/dotnet/aspnetcore/issues/19476#issuecomment-629457165 and https://github.com/dotnet/aspnetcore/issues/6575.
This exception seems to be thrown when a client aborts mid-request. My question is, should this be logged as a warning instead of an exception? It creates a lot of noise in our logs.