aws / amazon-s3-plugin-for-pytorch

Allow increasing executorPoolSize #13

Open cobookman opened 2 years ago

cobookman commented 2 years ago

A p4d.24xl offers 4x100 Gbps of network throughput, which 25 threads will most likely not saturate. Making executorPoolSize configurable would allow more threads and faster S3 throughput.

I'd need to run a test with this library, but I recently saw 100 Gibps of throughput to an m5n.24xl using ~90 threads downloading from S3, whereas with 25 threads downloading from S3 I got just 44.286 Gibps.

Currently this library hard-codes 25 threads for S3 downloads: https://github.com/aws/amazon-s3-plugin-for-pytorch/blob/38284c8a5e92be3bbf47b08e8c90d94be0cb79e7/awsio/csrc/io/s3/s3_io.cpp#L46
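
For illustration, here is a minimal sketch of how the pool size could be read at startup instead of being hard-coded. The `S3_EXECUTOR_POOL_SIZE` environment variable and the helper function below are hypothetical, not part of the plugin; the returned value would stand in for the literal 25 wherever the AWS SDK thread executor is constructed.

```cpp
// Sketch only: make the executor pool size configurable via an environment
// variable, falling back to the current hard-coded default of 25.
#include <cstdlib>
#include <exception>
#include <string>

namespace {

constexpr int kDefaultExecutorPoolSize = 25;  // current hard-coded value

// Returns the pool size from S3_EXECUTOR_POOL_SIZE (hypothetical variable),
// or the default when the variable is unset or not a positive integer.
int getExecutorPoolSize() {
  const char* env = std::getenv("S3_EXECUTOR_POOL_SIZE");
  if (env != nullptr) {
    try {
      const int value = std::stoi(env);
      if (value > 0) {
        return value;
      }
    } catch (const std::exception&) {
      // Ignore parse errors and fall through to the default.
    }
  }
  return kDefaultExecutorPoolSize;
}

}  // namespace
```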

ydaiming commented 2 years ago

@cobookman

We're upstreaming the amazon-s3-plugin-for-pytorch into the torchdata package (https://github.com/pytorch/data/pull/318). We're dropping support for this plugin.

The current S3 plugin doesn't have this feature, and neither do the new S3 IO datapipes. We'll backlog this feature request and add the feature to the new S3 IO datapipes.