Learned today about "ensemble methods" for supervised learning: if we have several weak predictors built on largely orthogonal features, we can aggregate them into a single stronger predictor.
The standard baseline method is "bagging" (bootstrap aggregating): each predictor is trained on a bootstrap resample of the data, and at prediction time we "poll" all predictors and average their outputs, giving each predictor equal weight.
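Sketched a quick toy version to make sure I understand the mechanics. This is a minimal sketch assuming scikit-learn base learners and a synthetic binary classification dataset, not our actual models or features; the hyperparameters (n_estimators, max_depth) are illustrative:

```python
# Minimal bagging sketch: train each weak learner on a bootstrap resample,
# then "poll" all learners with equal weight via a majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_estimators = 25
learners = []
for _ in range(n_estimators):
    # Bootstrap: sample len(X_train) rows with replacement.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier(max_depth=3)  # deliberately weak learner
    tree.fit(X_train[idx], y_train[idx])
    learners.append(tree)

# Equal-weight poll: average the 0/1 votes and threshold at 0.5.
votes = np.stack([t.predict(X_test) for t in learners])  # (n_estimators, n_test)
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("bagged accuracy:", (y_pred == y_test).mean())
```

(scikit-learn also ships a ready-made BaggingClassifier in sklearn.ensemble that wraps this same loop.)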
Thought it might be useful to keep in mind in case our individual models, on their own, aren't achieving much of a performance improvement.