This is an experimental feature that explores how to better support training on heterogeneous base estimators in Ensemble-Pytorch. Unlike the case where all base estimators are of the same type, the training protocols of different base estimators can differ considerably, so many methods in the existing ensembles (e.g., `set_optimizer`, `set_scheduler`, `fit`) cannot properly support heterogeneous base estimators.
An alternative solution is to enable users to fit each base estimator incrementally. Specifically, the workflow below shows one way to support heterogeneous base estimators (see the code sketch after the workflow):
Declare the ensemble (no base estimator included)
For m = 1 to M:
    Set the m-th base estimator
    Set the optimizer and scheduler for the m-th base estimator
    Fit the m-th base estimator
    Append the fitted base estimator onto the internal container of base estimators
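To make the workflow concrete, here is a minimal, self-contained sketch of the idea. The class and method names (`IncrementalEnsemble`, `fit_estimator`, `pop`) are hypothetical placeholders for the proposed API, not existing Ensemble-Pytorch methods:

```python
import torch
import torch.nn as nn


class IncrementalEnsemble(nn.Module):
    """Voting-style ensemble whose base estimators are fitted one by one."""

    def __init__(self):
        super().__init__()
        # Internal container of fitted base estimators.
        self.estimators_ = nn.ModuleList()

    def fit_estimator(self, estimator, optimizer, criterion, train_loader,
                      epochs=1, scheduler=None):
        # Fit one base estimator with its own optimizer/scheduler, then
        # append it onto the internal container.
        estimator.train()
        for _ in range(epochs):
            for data, target in train_loader:
                optimizer.zero_grad()
                loss = criterion(estimator(data), target)
                loss.backward()
                optimizer.step()
            if scheduler is not None:
                scheduler.step()
        self.estimators_.append(estimator)

    def pop(self, index=-1):
        # Remove and return a fitted base estimator from the container.
        estimator = self.estimators_[index]
        del self.estimators_[index]
        return estimator

    def forward(self, x):
        # Average the predictions of all fitted base estimators.
        outputs = [estimator(x) for estimator in self.estimators_]
        return torch.stack(outputs).mean(dim=0)
```

Since each call to `fit_estimator` accepts its own estimator, optimizer, and scheduler, the base estimators no longer need to share a type or a training protocol. A usage sketch, where `MLP`, `LeNet5`, `ResNet18`, and `train_loader` are hypothetical stand-ins for the user's own models and data loader:

```python
ensemble = IncrementalEnsemble()
for estimator in (MLP(), LeNet5(), ResNet18()):  # heterogeneous types
    optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)
    ensemble.fit_estimator(estimator, optimizer,
                           nn.CrossEntropyLoss(), train_loader, epochs=10)
```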
Obviously, there is no way to parallelize the training of base estimators under this paradigm, but I think this is acceptable: state-of-the-art deep learning models are typically compute-bound, which makes parallelization less useful when only one GPU is available.
In addition, users can pop fitted base estimators from the ensemble if they want.
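With the hypothetical `pop` method from the sketch above, that would look like:

```python
# Remove the most recently fitted base estimator from the ensemble.
removed = ensemble.pop(-1)
```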
Related issue: #5