graebm closed this pull request 3 months ago.
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 89.54%. Comparing base (cc06c41) to head (f1390ae).
Also, can you add a section on forced buffers to https://github.com/awslabs/aws-c-s3/blob/main/docs/memory_aware_request_execution.md? We probably want to keep it up to date.
Issue:

`aws_s3_meta_request_write()` must write to a buffer immediately if the data is less than part-size. Currently, it uses a buffer hooked up to the default allocator (code here). We'd like to get these buffers from the buffer-pool, to reduce memory fragmentation and resident set size (RSS).

The problem is: the buffer-pool maintains strict memory limits, and won't allow further allocations when that limit is hit. But `aws_s3_meta_request_write()` MUST get a buffer immediately, or else the system can deadlock (see description in PR https://github.com/awslabs/aws-c-s3/pull/418).

Description of changes:

Add `aws_s3_buffer_pool_acquire_forced_buffer()`. These buffers can be created even if they exceed the memory limit.

Future work:

Modify `aws_s3_meta_request_write()` to use this new function.

Additional thoughts:

`aws_s3_meta_request_write()` should limit the total number of uploads like: `max-uploads = memory-limit / part-size`. That was the case even before this PR.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.