Closed. davidni closed this issue 8 months ago.
I think 413 Payload Too Large is a better error code. What do you guys think? @TomPallister
@davidni commented on Jan 15, 2019
David, thanks for reporting! I don't see any PRs from you... Is this still relevant? The issue was created for the old, legacy version of Ocelot. Do you intend to discuss the bug?
404 is not appropriate, and the error logged by Ocelot should be more specific than UnmappableRequestError. It just so happens that the exception occurs while doing request mapping, but it's really an I/O problem while reading the incoming request stream. The response status shouldn't be 404. In the case of a too-large request, 400 might be appropriate; in other cases, something in the 5xx range.
You might be surprised, but Ocelot has a strange error-handling mechanism with status code overriding, aka the Http Error Status Codes feature (docs in source). This is Tom's design; in the future it should be redesigned. I don't like HTTP status code overriding, because the actual status is lost in the Ocelot core. And, I guess, Ocelot doesn't return the actual code in a custom response header.
"In other cases, something in the 5xx range."
What cases are you talking about?
"400 might be appropriate"
Why not? When will you create a PR with the bug fix? In my opinion, 411 Length Required and 413 Content Too Large are more appropriate, because they are more specific.
@Jalalx commented on Nov 9, 2020
Agree on 413! This is the best status code to return for Content-Length problems with the request.
P.S. Don't mention Tom in these threads. He hasn't read Ocelot GitHub notifications since 2020... Only the current maintainers read notifications: me and Raynald.
@raman-m This issue is present with the latest Ocelot version too. I am able to replicate it now; I was missing the UseHttpSys() configuration.
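For anyone trying to replicate this, here is a minimal host sketch, assuming a standard minimal-API setup and the usual ocelot.json file; the setup is illustrative, not taken from this thread. It hosts Ocelot on HTTP.sys, whose MaxRequestBodySize defaults to 30,000,000 bytes:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// Host on HTTP.sys instead of Kestrel (Windows only).
builder.WebHost.UseHttpSys(options =>
{
    // Left at the default of 30,000,000 bytes; lowering it makes the repro faster.
    // options.MaxRequestBodySize = 30_000_000;
});

builder.Configuration.AddJsonFile("ocelot.json");
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();
await app.UseOcelot();
await app.RunAsync();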
@ks1990cn I would prefer to wait for a reply from the issue raiser, because I need to understand his vision and his idea for fixing this bug. If he stays silent, he will be unassigned. Let's wait for a couple of weeks, up to one month...
Please refer to the dummy changes I made to RequestMapper.cs on my branch: https://github.com/ks1990cn/Ocelot/tree/ts/issue_729
@raman-m I respect your statement and am waiting for @davidni to come back with his idea on this bug. I am just trying to understand the flow and what I can do to fix this, if anything comes up. Meanwhile, I have the following question:
In RequestMapper.cs on my branch, I have created another parameterized constructor which supplies a different MaximumRequestBodySize for each host server (IIS, HTTP.sys, Kestrel). Is there any generic way to figure out MaximumRequestBodySize irrespective of the hosting server? (A server-agnostic sketch follows below.)
Note: in this Stack Overflow discussion they also configure it per server, only at the global level: https://stackoverflow.com/questions/38698350/increase-upload-file-size-in-asp-net-core
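As far as I understand the ASP.NET Core feature interfaces, there is a server-agnostic option: every server (Kestrel, HTTP.sys, in-process IIS) exposes the limit through IHttpMaxRequestBodySizeFeature, so it can be read, and sometimes changed, per request without knowing which server hosts the app. The helper below is my own sketch with a hypothetical class name, not code from the branch:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;

public static class RequestBodySizeHelper // hypothetical helper, not an Ocelot type
{
    // Returns the effective limit in bytes, or null when the server imposes no limit.
    public static long? GetMaxRequestBodySize(HttpContext context)
        => context.Features.Get<IHttpMaxRequestBodySizeFeature>()?.MaxRequestBodySize;

    // Tries to change the limit for the current request only.
    public static bool TrySetMaxRequestBodySize(HttpContext context, long? newLimit)
    {
        var feature = context.Features.Get<IHttpMaxRequestBodySizeFeature>();
        if (feature is null || feature.IsReadOnly)
        {
            // The body has already started being read, or the server doesn't expose the feature.
            return false;
        }

        feature.MaxRequestBodySize = newLimit;
        return true;
    }
}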
@ks1990cn OK, great! In my opinion we could wait for David's reply for months, even years... But who knows...
If your solution is ready, you could open a PR to the develop branch.
Also, pay attention to the fact that we have another PR, #1724, which relates to RequestMapper!
Sure, I am looking into it. I am just trying to figure out which hosting server the current application is running on, because MaximumRequestBodySize gives 30MB on every host server (IIS, HTTP.sys, Kestrel) and we can modify it too, but how do we find which one we are using in the current application?
I am also looking into the aspnetcore source to understand how I can find this value. (A possible approach is sketched below.)
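One option, sketched here as an assumption rather than code from any branch, is to inspect the concrete IServer implementation registered in the application's service provider; the type-name checks are only illustrative:

using System;
using Microsoft.AspNetCore.Hosting.Server;
using Microsoft.Extensions.DependencyInjection;

public static class ServerDetector // hypothetical helper, not an Ocelot type
{
    public static string DescribeServer(IServiceProvider services)
    {
        // The hosting server registers exactly one IServer implementation.
        var server = services.GetRequiredService<IServer>();
        var name = server.GetType().FullName ?? string.Empty;

        // KestrelServer => Kestrel, MessagePump (Microsoft.AspNetCore.Server.HttpSys) => HTTP.sys,
        // IISHttpServer => in-process IIS.
        if (name.Contains("Kestrel")) return "Kestrel";
        if (name.Contains("HttpSys")) return "HTTP.sys";
        if (name.Contains("IIS")) return "IIS (in-process)";
        return name;
    }
}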
Steps to Reproduce the Problem
This happens consistently, for example, for requests that are too large for the server (HttpSys throws an IOException if the Content-Length of the incoming request is larger than 30MB by default -- see: AspNetCore source).
- Send a request to Ocelot, hosted on HttpSys, with Content-Length larger than 30,000,000.
- Boom!
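A minimal client-side sketch of that repro step; the gateway URL and route are hypothetical placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

internal static class LargeRequestRepro
{
    public static async Task Main()
    {
        using var client = new HttpClient();

        // 31 MB body: larger than HTTP.sys's default MaxRequestBodySize of 30,000,000 bytes.
        var payload = new byte[31 * 1024 * 1024];
        using var content = new ByteArrayContent(payload);

        // Today Ocelot answers 404 (UnmappableRequestError); the proposal is 413 Payload Too Large.
        var response = await client.PostAsync("http://localhost:5000/some-route", content);
        Console.WriteLine((int)response.StatusCode);
    }
}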
We need to fix this issue for HttpSys only, right? But the Ocelot team is not responsible for third-party deployments and hosting, except Docker and Kestrel hosting, with limited support for IIS use cases. It seems this is a very rare use case. And it will not be easy to fix, because an HttpSys/IIS environment must be deployed for testing...
@davidni FYI. And please come back to us with a ready PR.
@raman-m The error handling is hiding the 413 Payload Too Large exception. I've tested this with Kestrel; it throws a BadHttpRequestException. I will create a PR as soon as I have some time...
// Map the server's BadHttpRequestException to a specific Ocelot error instead of the
// generic UnmappableRequestError, so the 413 is not swallowed.
catch (BadHttpRequestException badHttpRequest)
{
    // Kestrel reports an oversized request body with StatusCode 413.
    return badHttpRequest.StatusCode == 413
        ? new ErrorResponse<HttpRequestMessage>(new PayloadTooLargeError(badHttpRequest))
        : new ErrorResponse<HttpRequestMessage>(new UnmappableRequestError(badHttpRequest));
}
Expected Behavior

If an IOException happens while reading the incoming request body (in RequestMapper.MapContent here), Ocelot fails the request with an UnmappableRequestError and status code 404.

404 is not appropriate, and the error logged by Ocelot should be more specific than UnmappableRequestError. It just so happens that the exception occurs while doing request mapping, but it's really an I/O problem while reading the incoming request stream. The response status shouldn't be 404. In the case of a too-large request, 400 might be appropriate; in other cases, something in the 5xx range.

Steps to Reproduce the Problem

This happens consistently, for example, for requests that are too large for the server (HttpSys throws an IOException if the Content-Length of the incoming request is larger than 30MB by default -- see: AspNetCore source).

Ocelot returns 404, and logs UnmappableRequestError.

Specifications