mosaicml / streaming

A Data Streaming Library for Efficient Neural Network Training
https://streaming.docs.mosaicml.com
Apache License 2.0

Reuse S3 session #622

Open wouterzwerink opened 6 months ago

wouterzwerink commented 6 months ago

🚀 Feature Request

Currently, when I use S3 with an IAM role, I see StreamingDataset fetch new credentials for every shard (see the attached screenshot of the log output). There is a never-ending stream of credential logs after this.

That's quite inefficient: fetching credentials from IAM roles is not that fast. It would be nicer to reuse credentials until they expire.

Motivation

Faster is better!

[Optional] Implementation

I think it would work to just reuse the S3 Session object per thread
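As a minimal sketch of that idea, assuming boto3 (the helper names below are hypothetical, not part of the streaming library): cache one Session per thread and reuse it for every shard download, so IAM credentials are resolved once per thread instead of once per shard. Since boto3 Sessions are not thread-safe, one Session per download thread keeps the change safe while still letting credentials be cached until they expire.

```python
# Hypothetical sketch: reuse one boto3 Session per thread for all shard
# downloads instead of resolving IAM credentials on every download.
# Not the library's actual code.
import threading

import boto3

_local = threading.local()


def _get_session() -> boto3.session.Session:
    """Return this thread's cached boto3 Session, creating it on first use."""
    if not hasattr(_local, 'session'):
        _local.session = boto3.session.Session()
    return _local.session


def download_shard(bucket: str, key: str, local_path: str) -> None:
    """Download one shard object, reusing the thread-local Session."""
    client = _get_session().client('s3')
    client.download_file(bucket, key, local_path)
```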

Additional context

snarayan21 commented 5 months ago

Hey! If it's not too much of a hassle, mind submitting a PR with your proposed change? I'd be happy to review

wouterzwerink commented 5 months ago

> Hey! If it's not too much of a hassle, mind submitting a PR with your proposed change? I'd be happy to review

Sure! I made a fix for this that worked earlier, but I will need to clean it up a bit before submitting. I'll take a look sometime next week.

snarayan21 commented 5 months ago

Perfect, thank you @wouterzwerink! Feel free to tag me when the PR is up.

snarayan21 commented 5 months ago

@wouterzwerink Hey, just wanted to follow up on this, mind submitting a quick PR if/when you have some time? Thanks!!

huxuan commented 1 month ago

I am interested in this issue (we actually need it for a potential performance improvement). I think the question is at which level we want to keep a boto3 session. Maybe keep one session for each stream? If so, I propose creating an S3 client in Stream and reusing it when download_file() is triggered in Stream._download_file(), roughly as sketched below. Any comments?
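A rough sketch of that option, assuming boto3; the Stream shown here is a simplified stand-in, not the library's actual class, and the attributes are placeholders:

```python
# Simplified stand-in for the real Stream class, just to illustrate caching
# one boto3 S3 client per stream and reusing it for every shard download.
from urllib.parse import urlparse

import boto3


class Stream:

    def __init__(self, remote: str, local: str) -> None:
        self.remote = remote  # e.g. 's3://my-bucket/my-dataset'
        self.local = local
        self._s3_client = None  # created lazily, then reused

    def _get_s3_client(self):
        """Create the S3 client on first use and cache it on the stream."""
        if self._s3_client is None:
            self._s3_client = boto3.client('s3')
        return self._s3_client

    def _download_file(self, basename: str, local_path: str) -> None:
        """Download one shard file, reusing this stream's cached client."""
        obj = urlparse(f'{self.remote}/{basename}')
        bucket, key = obj.netloc, obj.path.lstrip('/')
        self._get_s3_client().download_file(bucket, key, local_path)
```

One client per stream would still mean one client per worker process, so this sits between the current per-download behavior and a fully shared cache.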

karan6181 commented 1 month ago

@huxuan Are you seeing any performance degradation with the current approach? If yes, by how much?

huxuan commented 1 month ago

> @huxuan Are you seeing any performance degradation with the current approach? If yes, by how much?

I have not done that yet; maybe I can implement a draft version for comparison.

wouterzwerink commented 1 month ago

@huxuan I ended up abandoning this after increasing the shard size, which made the S3 overhead negligible. Perhaps that will work for you as well?

huxuan commented 1 month ago

> @huxuan I ended up abandoning this after increasing the shard size, which made the S3 overhead negligible. Perhaps that will work for you as well?

Thanks for the response. We store feature vectors in the data, so the samples are relatively large (about 12 MB each). We are already using a size_limit of 200 MB, which results in approximately 16 samples per shard and a shard size of about 100 MB after zstd (default level 3) compression. IIUC, with a larger shard size we would also need to increase sampling_granularity to avoid adding more stress to the network; a rough sketch of this setup follows below.

Any comments are welcome.
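For reference, roughly what those settings look like; the paths, column names, and data below are placeholders, not the actual pipeline:

```python
# Placeholder illustration of the setup described above: ~12 MB samples,
# a 200 MB size_limit, zstd level-3 compression, and a larger
# sampling_granularity so subsampling draws bigger contiguous chunks from
# fewer shards.
from streaming import MDSWriter, StreamingDataset

columns = {'feature': 'bytes', 'id': 'int'}

with MDSWriter(out='s3://my-bucket/my-dataset',
               columns=columns,
               compression='zstd:3',
               size_limit=200 * 1024 * 1024) as writer:
    for i in range(64):
        # Dummy 12 MB payload standing in for a real feature vector.
        writer.write({'feature': bytes(12 * 1024 * 1024), 'id': i})

dataset = StreamingDataset(remote='s3://my-bucket/my-dataset',
                           local='/tmp/my-dataset',
                           sampling_granularity=16)
```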