pytorch / torcharrow

High performance model preprocessing library on PyTorch
https://pytorch.org/torcharrow/beta/index.html
BSD 3-Clause "New" or "Revised" License

Does torcharrow support industry-level large scale data? #476

Open circlecrystal opened 2 years ago

circlecrystal commented 2 years ago

I'm asking for myself and also for my algo team members at my company. We currently have PB-scale data, split into Parquet files across different remote HDFS paths (one per day), that we need to train on.

I would really like an answer to this question: how well does torcharrow perform on data at this scale in industry?
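To make the setup concrete, the per-shard access pattern we have in mind is roughly the sketch below (the HDFS path and the copy-based pyarrow-to-TorchArrow conversion are illustrative assumptions, not our production code):

```python
import pyarrow.parquet as pq
import torcharrow as ta

# Illustrative path to one daily Parquet shard on a remote HDFS cluster.
PARQUET_PATH = "hdfs://namenode/warehouse/events/dt=2022-01-01/part-00000.parquet"

def load_shard(path):
    """Read a single Parquet file and materialize it as a TorchArrow DataFrame."""
    table = pq.read_table(path)  # pyarrow handles the (remote) file read
    # Simple copy-based conversion via a Python dict; a zero-copy Arrow interop
    # path may be preferable if the installed TorchArrow version provides one.
    return ta.dataframe(table.to_pydict())

df = load_shard(PARQUET_PATH)
print(len(df))
```

Multiplied over thousands of daily files at PB scale, doing this naively inside each training job is exactly the part that does not scale for us.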

wenleix commented 2 years ago

Thanks for the interest! We have an internal, scalable distributed system called Data PreProcessing Service (DPP) [1] that executes traced TorchArrow programs at Meta scale.

It's an open question whether and how we can open source the distributed mode, as DPP is deeply integrated with Meta's infrastructure. It may be possible to open source just the tracer (think PyTorch FX Tracer) with separate integrations into the OSS big data ecosystem.
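To make "traced program" a bit more concrete: the idea is analogous to PyTorch FX symbolic tracing, where the eager preprocessing function is recorded as a graph of operations rather than executed, and that graph is what gets shipped to a remote executor. A rough FX-based illustration of the recording idea (this is not the TorchArrow tracer API itself):

```python
import torch
from torch import fx

def preprocess(batch: torch.Tensor) -> torch.Tensor:
    # Stand-in for a row-wise feature transformation.
    clipped = torch.clamp(batch, min=0.0)
    return torch.log1p(clipped)

# symbolic_trace records the calls as a graph instead of running them eagerly,
# so the captured program can be serialized and executed elsewhere.
gm = fx.symbolic_trace(preprocess)
print(gm.graph)  # op-by-op representation of the traced program
print(gm.code)   # Python source regenerated from the graph
```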

In your use case, is there a preferred big data stack you would like to integrate with to execute traced TorchArrow programs (e.g. Spark, Kafka, Ray, or a customized distributed runtime)?

cc @dracifer, @msaroufim, @damianr99

[1] https://arxiv.org/pdf/2108.09373.pdf

circlecrystal commented 2 years ago

Thanks for taking the time to answer my question. Our current stack mostly prefers Spark or Ray for executing distributed programs. The difficulty is that a solution is still missing when we aim to train a large model across multiple training containers, with large-scale training data, in the PyTorch framework.
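What we end up hand-rolling today looks roughly like the sketch below: list the daily Parquet files, shard them across DDP ranks and DataLoader workers, and run per-file preprocessing before feeding the trainer. The paths and the preprocessing hook are placeholders; a traced-TorchArrow-plus-Spark/Ray integration would ideally replace this glue.

```python
import pyarrow.parquet as pq
import torch.distributed as dist
from torch.utils.data import DataLoader, IterableDataset, get_worker_info

class ParquetShardDataset(IterableDataset):
    """Each (DDP rank, DataLoader worker) pair reads a disjoint subset of files."""

    def __init__(self, file_paths):
        self.file_paths = list(file_paths)

    def __iter__(self):
        rank = dist.get_rank() if dist.is_initialized() else 0
        world_size = dist.get_world_size() if dist.is_initialized() else 1
        info = get_worker_info()
        worker_id = info.id if info else 0
        num_workers = info.num_workers if info else 1

        # Global shard index over (rank, worker) pairs.
        shard = rank * num_workers + worker_id
        num_shards = world_size * num_workers

        for path in self.file_paths[shard::num_shards]:
            table = pq.read_table(path)
            # Per-file preprocessing (e.g. a TorchArrow program) would run here
            # before rows are handed to the training loop.
            for batch in table.to_batches(max_chunksize=1024):
                yield batch.to_pydict()

# Illustrative daily paths; in practice these come from listing HDFS.
paths = [f"hdfs://namenode/events/dt=2022-01-{d:02d}/part-0.parquet" for d in range(1, 31)]
loader = DataLoader(ParquetShardDataset(paths), batch_size=None, num_workers=4)
```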