Tudyx / ai-dataloader

A Rust port of the PyTorch DataLoader
Apache License 2.0

Multi-threaded dataloading with tch-rs #1

Open AzHicham opened 1 year ago

AzHicham commented 1 year ago

Hello,

Thank you for your awesome work! As you may know, there is no dataloader feature in tch-rs; it would be really cool to have one with ai-dataloader, and maybe with multi-threading support as a further step.

Thank you :)

Tudyx commented 1 year ago

Hello, thanks for your comment! I'm currently working on the single-threaded version; it should be available soon.

Tudyx commented 1 year ago

Single-threaded tch-rs integration is now available in version 0.4.0 :tada:

nishit-rapidops commented 1 year ago

Hello,

Thank you for your excellent work.

Any idea when we can expect a data loader with multi-threaded parallelism support?

Thank you

Tudyx commented 1 year ago

I'm currently benchmarking the single-threaded version against the PyTorch dataloader; multi-threading is definitely the next step for me. As I'm doing this in my spare time, I can't promise when it will be released, though.

AzHicham commented 1 year ago

Hello :) FYI, I did some benchmarks on a company project with the single-threaded version vs. PyTorch, and ai-dataloader is 2x faster than PyTorch. I'm not sure why yet (it involves HTTP calls, PNG-to-image conversion + tch-rs), but I'm pretty sure I can improve the speedup with multi-threading. I'm currently trying to implement a multi-threaded version with rayon, but it's a little bit complex ^^ If you have any ideas about how to do that, let me know :)

Tudyx commented 1 year ago

Hello ;) I've also observed a 2x speedup against PyTorch in my benchmarks, which are available here.

I'm also sure multi-threading will improve the speedup, as Rust doesn't have the limitation of the Python GIL.

I'm currently trying to implement a multi-threaded version with rayon, but it's a little bit complex ^^

Totally agree. I haven't found much documentation on the subject, other than this tutorial, which seems pretty good.

Tudyx commented 1 year ago

Another approach could be to take inspiration from the burn parallel dataloader.

AzHicham commented 1 year ago

Nice, I'll look into that. Another quick win along these lines: I simply used par_iter instead of iter with RAYON_NUM_THREADS=4, and the data loading took around 4x less time. Obviously this is not optimal (no prefetching, etc.), but it's a really good first step.

Tudyx commented 1 year ago

Nice finding! A 4x speedup could justify adding this solution as an MVP.

Tudyx commented 1 year ago

@AzHicham I've implemented your quick win in https://github.com/Tudyx/ai-dataloader/commit/253f0184f9b0df8a5a9c1ca59628bab5e95a6cf2 . I think finding optimal parallelization will need more maturation and benchmarking, but that's a first step.

AzHicham commented 1 year ago

Hello,

FYI, my implementation is slightly different, and I'm not sure your implementation uses multiple threads. In fact, using install from rayon just sets the maximum number of threads in a pool that can be used for a parallel operation, but the for loop itself is not running in parallel.

Also, what I'm trying now is to add prefetching. That way, each time next() is called on the dataloader iterator, the fetching might already be done. To achieve that, I was thinking of using a fixed-size queue between ProcessDataLoaderIter and the BatchIterator. WDYT?

IMO there are three ways to achieve multi-threading: either we have N threads and each thread works on its own batch (the Burn approach), or N threads work on the same batch before processing the next one. Obviously we can have a combination of both ^^
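The fixed-size-queue idea above can be sketched with a bounded channel from the standard library: a background thread fills the queue ahead of the consumer, and the channel's capacity provides back-pressure. This is only an illustration of the mechanism; `prefetch` and `make_batch` are hypothetical names, not the crate's API:

```rust
// Prefetching sketch: a producer thread prepares batches into a bounded
// queue so the consumer's next() often finds a batch already done.
use std::sync::mpsc::{sync_channel, Receiver};
use std::thread;

fn make_batch(id: usize) -> Vec<usize> {
    // Placeholder for the real fetch/collate work.
    (0..4).map(|i| id * 4 + i).collect()
}

fn prefetch(num_batches: usize, queue_size: usize) -> Receiver<Vec<usize>> {
    // `sync_channel` blocks the producer once `queue_size` batches are
    // queued -- exactly the back-pressure a fixed-size queue provides.
    let (tx, rx) = sync_channel(queue_size);
    thread::spawn(move || {
        for id in 0..num_batches {
            if tx.send(make_batch(id)).is_err() {
                break; // consumer dropped the receiver, stop early
            }
        }
    });
    rx
}

fn main() {
    let rx = prefetch(3, 2); // 3 batches, at most 2 prefetched at a time
    let batches: Vec<Vec<usize>> = rx.into_iter().collect();
    assert_eq!(batches.len(), 3);
    assert_eq!(batches[0], vec![0, 1, 2, 3]);
    println!("got {} prefetched batches", batches.len());
}
```

A single producer preserves batch order; combining this with per-batch worker threads (the Burn approach) would need re-ordering on the consumer side.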

Tudyx commented 1 year ago

Hello @AzHicham , thanks for your comments.

FYI, my implementation is slightly different, and I'm not sure your implementation uses multiple threads. In fact, using install from rayon just sets the maximum number of threads in a pool that can be used for a parallel operation, but the for loop itself is not running in parallel.

Good catch! I think I will keep the install to be able to set the number of threads, but I will use rayon primitives inside it to make sure parallelism is actually used.

Also, what I'm trying now is to add prefetching. That way, each time next() is called on the dataloader iterator, the fetching might already be done. To achieve that, I was thinking of using a fixed-size queue between ProcessDataLoaderIter and the BatchIterator. WDYT?

I think it's a great idea! Prefetching is definitely something I wanted to add, and any work on this is welcome. Using a fixed-size queue seems fine; I will need to take a closer look at the PyTorch implementation to give better insight.

AzHicham commented 1 year ago

Hello @Tudyx

Nice :)

FYI, I have a first working version with prefetching here

It works well (tests pass), but the implementation really sucks. I'm still struggling with some traits & 'static lifetimes ^^

I'll keep you posted.

Tudyx commented 1 year ago

The multithreaded version should be fixed by https://github.com/Tudyx/ai-dataloader/commit/b7035e2d8c9631f59f82d37a8e3511665ad0e279 , thanks again for spotting the issue.