Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0
How to split the whole dataset across GPUs in multi-GPU training? #375
How can the whole dataset be split across GPUs? In multi-GPU training I find that each GPU iterates over the entire dataset, whereas other repositories usually shard the dataset across GPUs.
My concern is that if each GPU has to go over the whole dataset, the total training time stays the same no matter how many GPUs I use.
Could someone please tell me how to split the dataset across GPUs? Thanks so much!
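For context on what "splitting" usually looks like: in PyTorch-style distributed training, the dataset object itself is typically left whole, and a distributed sampler hands each rank (GPU) a disjoint 1/world_size slice of the sample indices every epoch. The sketch below illustrates that strided-sharding scheme in plain Python; `shard_indices` is a hypothetical helper for illustration, not a function from the OFA codebase, and real training would use something like `torch.utils.data.DistributedSampler` instead.

```python
import math

def shard_indices(dataset_len, rank, world_size):
    """Return the sample indices assigned to one GPU (rank).

    Mirrors the strided sharding used by common distributed samplers:
    each rank sees a disjoint 1/world_size slice of the data, padded
    so every rank gets the same number of samples per epoch.
    """
    indices = list(range(dataset_len))
    # Pad so the total divides evenly across ranks (ranks must stay in step,
    # otherwise collective ops like gradient all-reduce would deadlock).
    per_rank = math.ceil(dataset_len / world_size)
    total = per_rank * world_size
    indices += indices[: total - dataset_len]
    # Strided split: rank r takes indices r, r + world_size, r + 2*world_size, ...
    return indices[rank:total:world_size]

# Example: 10 samples over 4 GPUs -> each rank gets 3 indices (with padding),
# so each GPU only iterates over ~1/4 of the data per epoch.
shards = [shard_indices(10, r, 4) for r in range(4)]
```

If each GPU really is iterating over the full dataset, the usual cause is building a plain `DataLoader` without a rank-aware sampler, so checking how the sampler is constructed in the training script is a good first step.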