boost-R / mboost

Boosting algorithms for fitting generalized linear, additive and interaction models to potentially high-dimensional data. The current release version can be found on CRAN (http://cran.r-project.org/package=mboost).

Is the base learner (component-wise) simple OLS in `glmboost` for any family? #103

Closed BruceChen2017 closed 4 years ago

BruceChen2017 commented 4 years ago

Just for confirmation. If it is, which part of the source code reveals this? By the way, what is the purpose of the `loss` argument in a `Family` object? Thanks in advance!

fabian-s commented 4 years ago

yes, it is OLS (but it can also do a ridge-penalized fit). this function sets up the design matrix for `bols` learners: https://github.com/boost-R/mboost/blob/84712fcb27511f45ae6664078dd772d1750c8549/R/bl.R#L92
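To make the idea concrete, here is a rough sketch of component-wise L2 boosting with simple OLS base learners — in Python rather than R, and purely illustrative (mboost's `bols` additionally handles intercepts, ridge penalties, and categorical covariates). In each iteration a univariate least-squares fit is computed for every predictor against the current residuals, and the best-fitting one is added with a small step length `nu`:

```python
import numpy as np

def componentwise_l2_boost(X, y, steps=200, nu=0.1):
    """Component-wise L2 boosting with simple OLS base learners.

    Illustrative sketch only -- not mboost's actual implementation.
    Columns of X are assumed centered; no intercept is fitted.
    """
    n, p = X.shape
    coef = np.zeros(p)
    f = np.zeros(n)                          # current additive predictor
    for _ in range(steps):
        u = y - f                            # residuals (negative gradient of squared loss)
        # univariate OLS slope of the residuals on each predictor
        slopes = X.T @ u / np.sum(X**2, axis=0)
        # residual sum of squares of each candidate base learner fit
        rss = np.sum((u[:, None] - X * slopes) ** 2, axis=0)
        j = np.argmin(rss)                   # pick the best-fitting component
        coef[j] += nu * slopes[j]
        f += nu * slopes[j] * X[:, j]
    return coef

# tiny demo: y depends only on the first column
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
coef = componentwise_l2_boost(X, y)
print(np.round(coef, 2))  # coef[0] should be close to 2, the rest near 0
```

With a Gaussian family and enough iterations this path approaches the ordinary least-squares fit; early stopping is what gives the regularized, variable-selecting behavior.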

the `loss` argument defines the loss function -- as the manual says, it is "an optional loss function with arguments y and f". `y` are the responses, `f` is the additive predictor / model prediction.
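As a sketch of how the two ingredients fit together (again Python, not mboost's code): a family bundles the negative gradient, which drives the fitting, with an optional `loss(y, f)`, which here is only used to track the empirical risk along the boosting path. Names like `gaussian_family` and `boost` are made up for this illustration:

```python
import numpy as np

# Hypothetical "family" for illustration: ngradient drives the fit,
# loss(y, f) is only evaluated to monitor the empirical risk.
gaussian_family = {
    "ngradient": lambda y, f: y - f,            # -d/df of (y - f)^2 / 2
    "loss":      lambda y, f: (y - f) ** 2 / 2,
}

def boost(X, y, family, steps=100, nu=0.1):
    """Generic gradient boosting with component-wise OLS base learners."""
    n, p = X.shape
    f = np.zeros(n)
    risk_path = []
    for _ in range(steps):
        u = family["ngradient"](y, f)           # working response
        slopes = X.T @ u / np.sum(X**2, axis=0) # univariate OLS slopes
        j = np.argmin(np.sum((u[:, None] - X * slopes) ** 2, axis=0))
        f += nu * slopes[j] * X[:, j]
        risk_path.append(family["loss"](y, f).mean())
    return f, risk_path

rng = np.random.default_rng(1)
X = rng.standard_normal((150, 3))
y = X @ np.array([1.0, -0.5, 0.0]) + 0.1 * rng.standard_normal(150)
f, risk = boost(X, y, gaussian_family)
print(risk[0], risk[-1])  # the risk should decrease along the path
```

Swapping in a different `ngradient`/`loss` pair (e.g. for binomial or Poisson responses) changes the model being fit without touching the boosting loop itself.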

BruceChen2017 commented 4 years ago

@fabian-s Thanks. What does "optional" mean? If I understand correctly, such a loss function would be used to calculate the AIC. This understanding comes from reading the paper "Boosting Algorithms: Regularization, Prediction and Model Fitting".

fabian-s commented 4 years ago

there's a misunderstanding -- this forum is for reporting bugs or feature requests for mboost, not for teaching people about boosting. better forums for that kind of question: Cross Validated or any of the subreddits devoted to stats or machine learning. good luck!