About the AWS limit (RAM usage): I actually had a PR open in the original BaGet repo; it was never merged, so I ended up forking BaGet to get that fix. This is the current code: https://github.com/bagetter/BaGetter/blob/main/src/BaGetter.Aws/S3StorageService.cs#L76
Compare to the version we use in "production" (in-house): https://github.com/coveooss/BaGet/blob/master/src/BaGet.Aws/S3StorageService.cs#L72
For some reason I can't explain, the original code copies the whole stream to a MemoryStream; this is completely unnecessary as the AWS SDK is perfectly happy uploading from any stream.
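For anyone curious, the gist of the streaming version looks something like this (a sketch assuming an `IAmazonS3` `_client` and a `_bucket` name as fields, not the fork's exact code):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Transfer;

public class S3StorageService
{
    private readonly IAmazonS3 _client;
    private readonly string _bucket;

    public S3StorageService(IAmazonS3 client, string bucket)
        => (_client, _bucket) = (client, bucket);

    public async Task PutAsync(string key, Stream content, string contentType, CancellationToken ct)
    {
        // Hand the incoming stream straight to the SDK instead of copying
        // it into a MemoryStream first. TransferUtility splits large
        // uploads into parts, so memory usage stays bounded regardless of
        // package size.
        using var transfer = new TransferUtility(_client);
        await transfer.UploadAsync(new TransferUtilityUploadRequest
        {
            BucketName = _bucket,
            Key = key,
            InputStream = content,
            ContentType = contentType,
            AutoCloseStream = false, // the caller owns the request stream
        }, ct);
    }
}
```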
If you have a PR open for your 8GB limit, let me know and I'll chip in and fix the AWS part too.
Absolutely, go ahead and make a PR for the stream fix, thanks!
Describe the bug
Uploading a NuGet package bigger than 2GB (which happens easily when the package is a Chocolatey package with an embedded installer) causes an error on the server (Linux, Docker, latest).
The problem already existed in the original BaGet and was addressed there by increasing the MultipartBodyLengthLimit to 2GB.
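For context, this limit is ASP.NET Core's FormOptions.MultipartBodyLengthLimit; a minimal sketch of raising it (not BaGet's exact code) looks like this:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<FormOptions>(options =>
{
    // int.MaxValue is roughly 2GB, which matches the cap described above.
    options.MultipartBodyLengthLimit = int.MaxValue;
});

// ... rest of app setup
```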
To Reproduce
Steps to reproduce the behavior:
Push a package larger than 2GB: nuget push packagename serverurl
Expected behavior
I'd expect the upload to succeed.
Additional context
@Regenhardt made a test build with the MultipartBodyLengthLimit increased to 8GB, which fixed the problem for NuGet packages smaller than 8GB. We also observed that local file storage does not need 8GB of free RAM to accept an 8GB package, because writes to that storage type go through file streams, so it would make sense to make the limit user-configurable. Cloud storages like AWS, however, currently buffer the upload in memory, so the configurable limit should only apply to local file storage, as sketched below.
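A minimal sketch of what that could look like with .NET minimal hosting; Storage:Type is an existing BaGet-style setting, while MaxPackageSizeGiB is a hypothetical new option:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Hypothetical setting, e.g. "Storage": { "Type": "FileSystem", "MaxPackageSizeGiB": 8 }
var storageType = builder.Configuration["Storage:Type"];
var maxGiB = builder.Configuration.GetValue<long?>("Storage:MaxPackageSizeGiB") ?? 2;

builder.Services.Configure<FormOptions>(options =>
{
    // Local file storage writes through file streams, so a large limit is
    // safe there. Cloud backends currently buffer uploads in memory, so
    // keep them at the ~2GB default until that is fixed.
    options.MultipartBodyLengthLimit = storageType == "FileSystem"
        ? maxGiB * 1024L * 1024 * 1024
        : int.MaxValue;
});

// Kestrel enforces its own request body cap (~30MB by default), which
// also has to be lifted for large uploads; null disables the check.
builder.WebHost.ConfigureKestrel(options => options.Limits.MaxRequestBodySize = null);
```

Kestrel's MaxRequestBodySize is lifted separately because FormOptions only governs multipart form parsing, not the overall request body size.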