Closed lhjohn closed 1 year ago
Duplicate of https://github.com/OHDSI/DeepPatientLevelPrediction/issues/41
Various methods could speed up model training by spreading the training workload across multiple worker nodes.
Data parallelism
Feature not yet available: mlverse/torch has an open issue for implementing DataParallel.
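Since mlverse/torch does not yet provide DataParallel, here is a minimal conceptual sketch (plain Python, not the torch API) of what data parallelism does: each worker computes gradients on its own shard of the batch, and averaging the per-shard gradients reproduces the full-batch gradient. The function names and the toy linear model are hypothetical, for illustration only.

```python
def grad_mse_linear(w, xs, ys):
    """Gradient of mean squared error for y ~ w * x on one data shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_grad(w, xs, ys, n_workers=2):
    """Split the batch into shards, compute per-shard gradients
    (in a real system, each shard runs on a separate worker node),
    then average the results."""
    shard = len(xs) // n_workers
    grads = [
        grad_mse_linear(w, xs[i * shard:(i + 1) * shard],
                           ys[i * shard:(i + 1) * shard])
        for i in range(n_workers)
    ]
    return sum(grads) / n_workers

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

full = grad_mse_linear(w, xs, ys)
parallel = data_parallel_grad(w, xs, ys)
# With equal-size shards, the averaged gradient matches the full batch.
assert abs(full - parallel) < 1e-9
```

Note that the equivalence holds exactly only when shards are equal-sized; real frameworks weight the reduction accordingly.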
Model parallelism
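A minimal conceptual sketch of model parallelism (illustrative only, not an existing API): the model's layers are partitioned across devices, and activations are handed off between them. The "device" functions below are hypothetical stand-ins for layers hosted on separate workers.

```python
def layer_on_device_a(x):
    # First half of the model, hosted on device/worker A (hypothetical).
    return [2 * v for v in x]

def layer_on_device_b(acts):
    # Second half, hosted on device/worker B; consumes A's activations.
    return sum(acts) + 1

def forward(x):
    # In a real setup, this hand-off is a device-to-device transfer.
    return layer_on_device_b(layer_on_device_a(x))

print(forward([1, 2, 3]))  # → 13
```

Unlike data parallelism, this helps when a single model is too large for one device, at the cost of sequential dependencies between stages.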