SomeProgrammerGuy opened 4 weeks ago
Thanks for writing that up and sorry for the time and frustration that led to it. It's true that huge files don't "just work", at least in MVC, but the particular bottleneck will vary from app to app. The docs you linked pointed to a file upload sample that can handle 5GB files with minor adjustments.
I had some trouble following where HeadersLengthLimit came into the problem. Can you tell me more about that? Are you putting the large file contents in the request header?
Hi @amcasey, if you read through the code snippet I posted, it is commented (hopefully clearly) to show exactly where HeadersLengthLimit comes into the problem. In a nutshell, the reader "drains" the _currentStream first using the hardcoded HeadersLengthLimit = DefaultHeadersLengthLimit (which you cannot change), passed as the LengthLimit, and throws an exception before it ever gets to LengthLimit = BodyLengthLimit.
This is using Postman to simply post a multipart file, with nothing special in the Postman setup. From memory, it works fine with files roughly up to 2GB. Even if Postman were sending something strange (which I doubt it is), this should still never happen.
Again, as stated above, the RFC places no technical limit on the size of the header or the body, so nothing should be restricted here at all. (In firewalls, web servers, etc., yes, since it is a security risk.)
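For reference, a minimal sketch (assuming Microsoft.AspNetCore.WebUtilities, with boundary and httpContext as placeholders) of how these limits are typically set on a MultipartReader. The object initializer only runs after the constructor returns, which, per the behaviour described above, is exactly why raising HeadersLengthLimit here comes too late:

```csharp
using Microsoft.AspNetCore.WebUtilities;

// "boundary" and "httpContext" are placeholders for illustration.
var reader = new MultipartReader(boundary, httpContext.Request.Body)
{
    // These setters only run after the constructor has completed; per the
    // report above, the constructor has already wired the hardcoded 16 KB
    // default into the stream that drains the multipart preamble.
    HeadersLengthLimit = 1024 * 1024,
    BodyLengthLimit = null // null = no per-section body limit
};

MultipartSection? section = await reader.ReadNextSectionAsync();
```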
I will have a look at your adjustments when I have a little time later.
Hi @amcasey,
I had a quick look at your link, but I couldn't fully understand what specific solution you were suggesting (admittedly, it was a quick glance). I attempted to make similar changes in my setup, but unfortunately, they didn't resolve the problem in my simple test case.
The existing code is quite complex, and I think it would be very beneficial to have a straightforward example, as I outlined in my original post. Here's what I'm looking for in basic terms:
Program.cs for Kestrel configurations to handle such uploads. I've already started this process, as seen in my original comment.
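For concreteness, here is a hedged sketch of what the Program.cs half of such a sample might look like, limited to settings I'm reasonably sure exist (the FormOptions part only matters if MVC's form/multipart binding pipeline is involved at all):

```csharp
using Microsoft.AspNetCore.Http.Features;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Remove Kestrel's global request body cap (default is roughly 28.6 MB).
builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBodySize = null; // null = unlimited
});

// Only relevant if MVC's form/multipart binding is used.
builder.Services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = long.MaxValue;
});

var app = builder.Build();
app.MapControllers();
app.Run();
```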
If we can get a basic scenario like this to work, I believe it would serve as an essential demonstration for documentation—showing that .NET can handle a large direct file stream to disk in the simplest form possible. From there, developers can decide on best practices or more advanced implementations, but the foundational "walk before you run" approach needs to work first.
In conclusion, note the code in the original post under: "Once I got the code running somewhat correctly, it still would not allow the file to progress past 3,854,123 bytes. It just sits there until my own timer runs out."
I agree that the existing sample is inadequate but, as you say, let's walk before we run. If you grab the branch from my PR, you should be able to compile a simple web app that has the basic characteristics you want - from a page, you can post a large file and have it written directly to disk on the server. I happen to have tested 5GB, but the difference between 5GB and (e.g.) 10GB shouldn't matter - the 32 bit boundaries are at 2/4GB.
In that branch you can open, build, and run aspnetcore\mvc\models\file-uploads\samples\3.x\SampleApp\SampleApp.csproj in VS 2022. (I happen to be on 17.12.0 Preview 3, but you should be able to use an older version, downgrading the TFM to 8.0 if necessary.) You should see a page like this:
The interesting link is the last one. The handler is in UploadPhysicalFile. It doesn't require a custom MultipartReader or modify (AFAICT) the HeaderLengthLimit.
My best guess is that your app is missing an update to KestrelServerOptions.Limits.MaxRequestBodySize, but it's hard to say without a buildable repro.
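For what it's worth, the per-action knobs that interact with that limit look roughly like this (the attribute names are real ASP.NET Core attributes; the controller and action bodies are placeholders):

```csharp
using Microsoft.AspNetCore.Mvc;

public class UploadController : ControllerBase
{
    // Removes the body size cap for this action only, overriding
    // KestrelServerOptions.Limits.MaxRequestBodySize. Alternatively,
    // [RequestSizeLimit(5L * 1024 * 1024 * 1024)] sets an explicit cap.
    [HttpPost("api/testupload")]
    [DisableRequestSizeLimit]
    [RequestFormLimits(MultipartBodyLengthLimit = long.MaxValue)]
    public IActionResult TestUpload()
    {
        // Placeholder: stream Request.Body to disk here.
        return NoContent();
    }
}
```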
@amcasey Your new code above errors exactly as in my original post: "Multipart body length limit 16384 exceeded." The reason why is explained in great detail in that post.
Did you attempt to upload a very big file to your new API endpoint using Postman as described?
Again as to my last post this should start from the basics.
Just stream any file, in chunks, directly to disk without using any of the multipart upload library:
Now this doesn't even need to be in a project.
This code can simply be two code windows here.
The first containing any Kestrel etc. settings in Program.cs.
And an API class with one method that does the above.
I have already posted a good start to this already in one of my earlier posts:
[HttpPost]
[Route("api/testupload")]
[Every possible version of size limits and disable-form attributes, etc., and in Program.cs]
public async Task<IActionResult> TestLargeSingleFileUploadAsync()
{
// There is a lot of stuff for logging and just trying to test and bodge the thing to work.
// Don't use this in production without going through it carefully first.
CancellationToken cancellationToken = HttpContext.RequestAborted;
bool uploadCompleted = false;
try
{
Log.Information("User is attempting to upload a file.");
// Define the path and filename where the incoming file will be saved.
string uploadingFilePath = Path.Combine(_appsettings.PublicUploadsDirectoryPath, "uploadedFile.txt");
// Use a reasonable buffer size for large file uploads.
const int bufferSize = 16 * 1024 * 1024; // 16 MB buffer size.
// Set a logging threshold of 200 MB.
const long loggingThreshold = 200 * 1024 * 1024; // 200 MB
// Keep track of the next logging threshold.
long nextLoggingThreshold = loggingThreshold;
// Use a FileStream to write the incoming data with a specified buffer size.
using (FileStream fileStream = new(
uploadingFilePath,
FileMode.Create,
FileAccess.Write,
FileShare.Read,
bufferSize: bufferSize, // Use the defined buffer size.
FileOptions.Asynchronous)) // Asynchronous I/O without WriteThrough for better performance
{
PipeReader bodyReader = HttpContext.Request.BodyReader;
long totalBytesRead = 0;
long totalBytesWritten = 0;
while (true)
{
// Read from the body reader using the cancellation token.
Task<ReadResult> readTask = bodyReader.ReadAsync(cancellationToken).AsTask();
// Check if the read task completes within the timeout (10 seconds).
if (await Task.WhenAny(readTask, Task.Delay(10000, cancellationToken)) != readTask)
{
Log.Warning("Read operation timed out, possibly because the client has stopped sending data.");
throw new TimeoutException("Read operation timed out, possibly because the client has stopped sending data.");
}
ReadResult readResult = await readTask;
ReadOnlySequence<byte> buffer = readResult.Buffer;
long bytesRead = buffer.Length;
if (readResult.IsCompleted && buffer.IsEmpty)
{
Log.Information("Completed reading from the request body. Total bytes read: {TotalBytesRead} bytes.", totalBytesRead);
break;
}
totalBytesRead += bytesRead;
// Write the buffer segments to the file stream.
foreach (ReadOnlyMemory<byte> segment in buffer)
{
await fileStream.WriteAsync(segment, cancellationToken);
totalBytesWritten += segment.Length;
// Log only after exceeding the next 200 MB threshold.
if (totalBytesWritten >= nextLoggingThreshold)
{
Log.Information("Total bytes written to file so far: {TotalBytesWritten} bytes.", totalBytesWritten);
nextLoggingThreshold += loggingThreshold; // Update to the next 200 MB threshold.
}
}
// Mark the buffer as consumed.
bodyReader.AdvanceTo(buffer.End);
// Check for cancellation before continuing the next read.
cancellationToken.ThrowIfCancellationRequested();
}
// Perform a single flush at the end of the upload.
await fileStream.FlushAsync(cancellationToken);
Log.Information("Final flush completed. Total bytes written to disk: {TotalBytesWritten} bytes.", totalBytesWritten);
Log.Information("User has successfully uploaded the file. Total bytes written: {TotalBytesWritten} bytes.", totalBytesWritten);
uploadCompleted = true; // Mark the upload as successfully completed.
}
return NoContent();
}
catch (OperationCanceledException)
{
Log.Warning("File upload was cancelled by the client.");
return StatusCode(StatusCodes.Status499ClientClosedRequest, "File upload cancelled by client.");
}
catch (TimeoutException timeoutException)
{
Log.Warning(timeoutException, "File upload timed out.");
return StatusCode(StatusCodes.Status408RequestTimeout, "File upload timed out.");
}
catch (Exception exception)
{
Log.Error(exception, "An unexpected error occurred during file upload.");
return StatusCode(StatusCodes.Status500InternalServerError, exception.Message);
}
finally
{
// Delete the incomplete file only if the upload did not complete successfully.
// if (!uploadCompleted)
// {
// string incompleteFilePath = Path.Combine(_appsettings.PublicUploadsDirectoryPath, "uploadedFile.txt");
// if (System.IO.File.Exists(incompleteFilePath))
// {
// try
// {
// System.IO.File.Delete(incompleteFilePath);
// Log.Information("Incomplete file '{FilePath}' has been deleted.", incompleteFilePath);
// }
// catch (Exception fileException)
// {
// Log.Error(fileException, "Failed to delete incomplete file '{FilePath}'.", incompleteFilePath);
// }
// }
// }
}
}
Please remember that you've been thinking about your scenario for much longer than we have, so things that seem obvious with your experience and context are less so for us.
Posting code fragments is useful for illustration purposes, but working programs are much easier to validate and debug. A toy repo on GitHub would be ideal if there's something you'd like to demonstrate.
I'm having some trouble following what limit you're currently encountering: is it at 16 KB, 1 GB, or 6 GB? Do they all fail in the same way or are there multiple failure modes?
The purpose of the example PR I shared was to demonstrate that Kestrel can handle very large file uploads - it may not be directly applicable to your scenario. I happen to have used the upload page already present in the sample, but I agree that it's important to work with multiple clients. If you use the page in the sample, does that work on your box? Are you only seeing problems with Postman, or have you tried other clients as well?
For security reasons, we're not allowed to run Postman on our network. If the problem can't be reproduced with, say, curl, a pcap (without TLS) would be a useful investigative resource.
Is there an existing issue for this?
Describe the bug
Large File Upload Issue (still) with MultipartReader in ASP.NET Core 8+
Problem (one of many) Summary
The real problem, even noting the below, is that .NET / ASP.NET is simply incapable of uploading large files (4GB+), which frankly is rather a joke in 2025. .NET 9 is on the horizon, and yet this fundamental web task still seems to be ignored.
Note: This is running in Visual Studio 2022 using only Kestrel. No other web server is involved. Before anybody starts commenting about the various Kestrel settings that can be changed: I have pretty much tried them all in some form (unless there is some hidden, strange one that is not commonly known).
- LengthLimit is hard coded to HeadersLengthLimit (16KB) during the constructor of MultipartReader.
- Developers cannot change HeadersLengthLimit before it's used, because the property can only be set after the constructor completes.
- When that limit is exceeded, the reader throws an InvalidDataException, preventing large file uploads.

Why This Is an Issue

- The inability to change HeadersLengthLimit before it's enforced means developers cannot handle large file uploads. Given the violation of the Single Responsibility Principle discussed below, this limitation shouldn't even exist.
- According to the RFC for multipart/form-data, there is no specified maximum size for multipart uploads or headers. Therefore, this hard coded limit imposes an unnecessary restriction that is not compliant with the RFC.
- This design of the MultipartReader class hinders developers from implementing essential functionality.

Previous Reports of This Issue (still after seven years, I mean really)

This issue has been reported to yourselves multiple times over the years:

Violation of the Single Responsibility Principle

Now I always note "Rules are for the obedience of fools and the guidance of wise men." but in this case it seems a good call.

The MultipartReader class is responsible for parsing multipart data, but it also imposes hard coded limits on headers (and possibly body sizes) without allowing developers to adjust them.

Issue Description:
Struggles and Final Thoughts
After struggling with this issue, I decided to write my own Multipart handler. But first, I wrote some code to directly take the uploaded stream and write it straight to a file without loading everything into memory. I tested it by attempting to upload a 5 GB file using Postman.
The following overly commented and overly verbose (not for production) code exists because if you cancel the upload in Postman or simply close Postman halfway through, Kestrel just keeps chugging along like nothing is wrong. It continues until it empties what I assume is its cache, and then just freezes. I’m sure there must be some timeout configuration to address this, but whatever it is, it’s incredibly long—almost like a DDoS/hacker's dream.
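Regarding the "Kestrel just keeps chugging along" behaviour: the Kestrel setting I believe governs this is the minimum request body data rate. A hedged configuration sketch (the values shown are Kestrel's documented defaults, stated here as an assumption; tune as needed):

```csharp
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Abort the request if the client sends slower than 240 bytes/sec
    // once a 5-second grace period has elapsed. If the client stops
    // sending entirely, this is what eventually tears the request down.
    options.Limits.MinRequestBodyDataRate =
        new MinDataRate(bytesPerSecond: 240, gracePeriod: TimeSpan.FromSeconds(5));
});
```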
Once I got the code running somewhat correctly, it still would not allow the file to progress past 3,854,123 bytes. It just sits there until my own timer runs out.
Conclusion
After spending a week battling with this, I realised that ASP.NET and Kestrel fall short when it comes to large file uploads. The lack of clear documentation and flexibility makes it evident that they are not well-suited for handling this use case. This isn’t just a small oversight—it’s a fundamental flaw that I think requires a complete redesign, not just a minor fix.
What Did I Do in the End?
Although my Rust programming skills are a bit rough around the edges, I used Axum, and it just works. Hardly any code, really fast, and even though Rust is difficult to program, it didn’t take long to get it working.
I only add this for anyone in 2024 trying to do this with .NET: Don’t waste the week I’ve spent trying to get it to work. Consider alternatives like Axum if you need reliable large file upload support.
If somebody has managed to get this to work in .NET (without loading the whole file into memory, for note), I would love to see a full working example, including all configuration around it, .NET and Kestrel.
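In the spirit of that request, here is a minimal hedged sketch of the simplest shape I can think of, not a verified full solution: a single minimal-API endpoint that streams the raw (non-multipart) request body straight to disk with CopyToAsync, with Kestrel's global cap removed. The route and file path are placeholders:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Remove Kestrel's global request body size cap.
builder.WebHost.ConfigureKestrel(o => o.Limits.MaxRequestBodySize = null);

var app = builder.Build();

// POST the raw file bytes (not multipart) to this endpoint, e.g.:
//   curl -X POST --data-binary @big.bin http://localhost:5000/upload
app.MapPost("/upload", async (HttpRequest request, CancellationToken ct) =>
{
    var path = Path.Combine(Path.GetTempPath(), "uploadedFile.bin"); // placeholder path
    await using var file = new FileStream(
        path, FileMode.Create, FileAccess.Write, FileShare.Read,
        bufferSize: 1024 * 1024, useAsync: true);

    // Copies the body to disk in chunks; the file is never buffered in full.
    await request.Body.CopyToAsync(file, ct);
    return Results.NoContent();
});

app.Run();
```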
References
Expected Behavior
To be able to upload large files...
Steps To Reproduce
No response
Exceptions (if any)
No response
.NET Version
.NET 8
Anything else?
No response