When downloading objects in chunks, for instance with `TransferManager` and `multiPartCopyPartSize`, the memory usage of this mock is proportional to `object size * multipart count`. This is irrespective of backend.
For example, if I have a 300 MB object and download it in chunks of 30 MB, resulting in 300MB/30MB = 10 parallel downloads, the mock uses 300MB * 10 ≈ 3GB of memory. This is prohibitive.
I suspect this is because each of the handlers reads the data from the provider via `GetObjectData`, and `GetObjectData` always returns the full file as a byte array. With 10 actors each reading the entire file into memory, this memory bloat is the consequence.
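A possible direction for a fix: instead of a `GetObjectData`-style call that materializes the whole object as one `byte[]`, each ranged GET handler could read only the slice it was asked for. A minimal sketch, assuming the object is backed by a file on disk (`readRange` is a hypothetical helper, not the mock's actual API):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RangedRead {
    // Hypothetical replacement for a full-file GetObjectData: read only the
    // [start, start + length) slice that the ranged GET requested, so each
    // concurrent part handler holds one part's worth of memory, not the
    // whole object.
    static byte[] readRange(Path object, long start, int length) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(object.toFile(), "r")) {
            byte[] slice = new byte[length];
            raf.seek(start);
            raf.readFully(slice);
            return slice;
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a stored object with known contents.
        Path tmp = Files.createTempFile("object", ".bin");
        byte[] data = new byte[1024];
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i % 251);
        Files.write(tmp, data);

        // A part handler asking for bytes 100..129 allocates only 30 bytes.
        byte[] part = readRange(tmp, 100, 30);
        System.out.println("part length = " + part.length);
        System.out.println("first byte  = " + part[0]);
    }
}
```

With this shape, 10 parallel 30 MB part downloads of a 300 MB object would peak around 10 * 30MB = 300 MB of buffer memory rather than ~3 GB.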