shashi-banger opened this issue 1 year ago
Attaching debug logs for reference
Hi, thank you for the feedback. Mountpoint is optimized for reading large files sequentially and prefetches data when it detects a sequential read pattern to improve throughput. This can increase memory usage, depending on the specific access pattern of each application.
In your use cases with md5sum and the python script, we expect Mountpoint to see mostly sequential reads and start prefetching increasingly large chunks of data, currently up to a maximum size of 2GB. That could explain the behavior you are seeing.
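For illustration only, here is a minimal Rust sketch (not Mountpoint's actual implementation) of how a prefetcher can grow its request size while reads stay sequential and cap it at a maximum; the constants and struct names are made up for the example:

```rust
// Illustrative sketch: a prefetch window that doubles its request size on each
// sequential read, capped at a maximum, and resets on an out-of-order read.
const INITIAL_REQUEST_SIZE: usize = 1024 * 1024; // 1 MiB
const MAX_REQUEST_SIZE: usize = 2 * 1024 * 1024 * 1024; // 2 GiB cap

struct PrefetchWindow {
    next_offset: u64,
    request_size: usize,
}

impl PrefetchWindow {
    fn new() -> Self {
        Self { next_offset: 0, request_size: INITIAL_REQUEST_SIZE }
    }

    /// Called for every read; returns the size to prefetch for this read and
    /// grows the window while the access pattern stays sequential.
    fn on_read(&mut self, offset: u64, len: usize) -> usize {
        if offset != self.next_offset {
            // Out-of-order read: reset to a small window.
            self.request_size = INITIAL_REQUEST_SIZE;
        }
        self.next_offset = offset + len as u64;
        let size = self.request_size;
        // Grow the next request while reads stay sequential, up to the cap.
        self.request_size = (self.request_size * 2).min(MAX_REQUEST_SIZE);
        size
    }
}

fn main() {
    let mut window = PrefetchWindow::new();
    let mut offset = 0u64;
    for _ in 0..12 {
        let size = window.on_read(offset, 128 * 1024);
        println!("offset {offset}: prefetch {} MiB", size / (1024 * 1024));
        offset += 128 * 1024;
    }
}
```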
Also, not sure if relevant, but we have an open issue around how dropped GetObject requests (e.g. on out-of-order reads) are handled: https://github.com/awslabs/mountpoint-s3/issues/510. It may be worth tracking that and re-running your workflow once it is fixed.
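For context, this is roughly what an out-of-order read pattern looks like from the application side; the mount path and object name below are placeholders, assuming /mnt/s3 is a Mountpoint mount:

```rust
// Minimal sketch of an access pattern that breaks sequential prefetching.
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

fn main() -> std::io::Result<()> {
    let mut f = File::open("/mnt/s3/large-object.bin")?;
    let mut buf = vec![0u8; 1024 * 1024];

    // Sequential reads: Mountpoint can detect this pattern and prefetch ahead.
    f.read_exact(&mut buf)?;
    f.read_exact(&mut buf)?;

    // Seeking backwards (or far forwards) breaks the sequential pattern, so the
    // in-flight GetObject for the prefetched range is no longer useful and gets
    // dropped; that handling is what issue #510 tracks.
    f.seek(SeekFrom::Start(0))?;
    f.read_exact(&mut buf)?;
    Ok(())
}
```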
Thank you for the response. Please consider adding a command-line option or configuration setting to limit the maximum prefetch size. This would allow users to set memory limits for a container more reliably.
Maybe the fix for #510 will help. Will retry once it is fixed.
I second that it would be very convenient to have an option to limit the memory usage for scenarios where memory availability is limited. I understand that this will likely impact performance, but that's still better than getting OOM killed.
If I wanted to modify that behavior to prefetch only 128MiB, for example, I'd need to modify the constants here, right? https://github.com/awslabs/mountpoint-s3/blob/7dcaee0966ca20c91d86b0d8b1388bcc72a24c38/mountpoint-s3/src/prefetch.rs#L142
@CrawX yes, that's the constant you'd want to modify to scale back the prefetcher's aggressiveness.
We're currently looking into a more comprehensive way to limit memory usage; hope to have more to share on that soon!
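For anyone following along, here is a hedged, self-contained sketch of the kind of local patch being discussed: shrinking the prefetcher's maximum request size before recompiling. The struct and field names mirror what the thread mentions (max_request_size), but the real definition lives in mountpoint-s3/src/prefetch.rs and its fields and defaults may differ between versions:

```rust
// Sketch only, not the actual Mountpoint source. It illustrates the one-line
// change discussed in this thread: lowering the maximum prefetch request size.
#[derive(Debug, Clone, Copy)]
struct PrefetcherConfig {
    /// Size of the first GetObject request for a newly opened file.
    first_request_size: usize,
    /// Upper bound on how large a single prefetch request may grow.
    max_request_size: usize,
    /// Growth factor applied while reads stay sequential.
    sequential_prefetch_multiplier: usize,
}

impl Default for PrefetcherConfig {
    fn default() -> Self {
        Self {
            first_request_size: 256 * 1024,
            // Upstream default is 2 GiB; patching this down (e.g. to 128 MiB)
            // is the workaround discussed here, at the cost of throughput.
            max_request_size: 128 * 1024 * 1024,
            sequential_prefetch_multiplier: 8,
        }
    }
}

fn main() {
    let config = PrefetcherConfig::default();
    println!(
        "max prefetch request: {} MiB",
        config.max_request_size / (1024 * 1024)
    );
}
```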
I came across this issue while searching for excessive memory usage. My use case is reading 100 large files sequentially and concurrently. I can confirm that updating max_request_size to 64 MB dramatically reduces memory usage, but this requires a recompile. Is there any plan to make this configurable?
We don't currently plan to expose this as a configuration. Instead, we're working on improvements that will allow Mountpoint to automatically scale down the amount of prefetching based on available resources. I don't have a date I can share for when this will be completed, but the work is ongoing, and I hope to share more news soon. (The most recent change, refactoring prefetching in preparation for this work: https://github.com/awslabs/mountpoint-s3/pull/980)
Sorry for the delay in responding here!
Thanks @dannycjones, this is great news. As a workaround, we are using Mountpoint with a patched max_request_size value for now. I'll be waiting for your work to be completed.
I've created this issue which is where we'll share updates on the automatic prefetcher scaling: #987.
Mountpoint v1.10.0 has been released with some prefetcher improvements and might reduce memory usage. Could you please try upgrading to see if it provides any improvements for you?
Mountpoint for Amazon S3 version
mount-s3 1.0.1-unofficial+7643a22
AWS Region
us-east-1
Describe the running environment
Running in a Docker container on a local PC. Also experienced OOMKilled when running as a pod on AWS EKS.
What happened?
docker stats shows memory usage increasing steadily to 2GB and above. The same behaviour occurs with the above python code execution as well.
Relevant log output