Open egillax opened 2 years ago
It would be good to be able to use multiple GPUs to train your model, especially for bigger models such as Transformers and models using temporal data. This is something @ted9219 and I briefly talked about during the OHDSI symposium.
Various methods could speed up model training by spreading the training workload across multiple worker nodes.
This feature is not yet available in R: mlverse/torch has an open issue for implementing DataParallel.
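For reference, below is a minimal sketch of what the requested feature looks like in Python PyTorch, whose `nn.DataParallel` wrapper is what the mlverse/torch issue tracks. This is not yet callable from R, and the model and batch sizes here are placeholders, not anything from this package.

```python
# Sketch of data-parallel training in Python PyTorch (not available in R torch yet).
# nn.DataParallel replicates the module on each visible GPU, scatters the batch
# along dim 0, and gathers the outputs back on the default device.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

if torch.cuda.device_count() > 1:
    # wrap the model so the forward pass is split across all visible GPUs
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(32, 128, device=device)
out = model(x)  # each GPU processes a slice of the 32-sample batch
```

Note that PyTorch's documentation now recommends DistributedDataParallel over DataParallel for better scaling across processes and nodes, which is closer to the multi-worker setup mentioned above.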