nectario opened this issue 4 years ago
You can use an AWS p3.8xlarge instance or larger for parallel processing across multiple GPUs.
You need to set the `multi_gpu` flag to `True`.
Thank you. In which config file do I set this?
@nectario It is not a config-file setting; you would pass it when initializing `BertDataBunch`.
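A minimal sketch of what that initialization might look like with fast-bert. The file names, column names, and paths below are placeholders, not values from this thread; the keyword arguments follow the `BertDataBunch` signature documented in the fast-bert README:

```python
from fast_bert.data_cls import BertDataBunch

# Hypothetical paths and file names -- substitute your own.
databunch = BertDataBunch(
    data_dir="./data",            # directory containing train/val CSVs
    label_dir="./labels",         # directory containing the label file
    tokenizer="bert-base-uncased",
    train_file="train.csv",
    val_file="val.csv",
    label_file="labels.csv",
    text_col="text",              # column with the input text
    label_col="label",            # column with the target label
    batch_size_per_gpu=16,        # effective batch = this x number of GPUs
    max_seq_length=512,
    multi_gpu=True,               # enable DataParallel across all visible GPUs
    multi_label=False,
    model_type="bert",
)
```

On a p3.8xlarge (4 V100 GPUs), `multi_gpu=True` lets the learner wrap the model so each batch is split across all four GPUs on the single instance.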
Is it possible to speed up BERT training by using multiple training instances?