# Cleaned up list
## Shrinkage estimators
**Linear Shrinkage**
- [x] [Ledoit and Wolf, 2004](http://www.ledoit.net/honey.pdf), generic LW estimators (usage sketch below this list)
- [x] [Schafer and Strimmer, 2005](http://strim…
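
For reference, a minimal usage sketch of the scikit-learn implementation of the Ledoit-Wolf estimator (`sklearn.covariance.LedoitWolf`); the synthetic data and dimensions are illustrative only.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))   # n=50 observations, k=20 variables (illustrative)

lw = LedoitWolf().fit(X)
sigma_lw = lw.covariance_           # shrunk covariance estimate
lam = lw.shrinkage_                 # estimated shrinkage intensity in [0, 1]
```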
-
Schafer and Strimmer (and other papers) also discuss variance shrinkage, and they tend to combine shrinkage of the sample covariance with shrinkage of the variances (e.g. in corpcor). …
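
A minimal numpy sketch of that combined scheme (shrink the off-diagonal correlations toward zero and the variances toward their median, as in the corpcor target); the fixed intensities `lam_var` and `lam_corr` are placeholders here, whereas Schafer and Strimmer derive data-driven optimal values.

```python
import numpy as np

def shrink_cov(X, lam_var=0.2, lam_corr=0.3):
    # lam_var, lam_corr are assumed fixed intensities for illustration;
    # Schafer and Strimmer estimate optimal values from the data.
    S = np.cov(X, rowvar=False)
    v = np.diag(S)
    # shrink variances toward their median (corpcor-style target)
    v_shrunk = (1 - lam_var) * v + lam_var * np.median(v)
    # shrink correlations toward the identity (off-diagonals toward zero)
    d = np.sqrt(v)
    R = S / np.outer(d, d)
    R_shrunk = (1 - lam_corr) * R + lam_corr * np.eye(len(v))
    # recombine shrunk correlations with shrunk variances
    d_shrunk = np.sqrt(v_shrunk)
    return R_shrunk * np.outer(d_shrunk, d_shrunk)
```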
-
```python
n_bases = 2
softGBM = SoftGradientBoostingRegressor(
    estimator=MLP,
    n_estimators=n_bases,
    shrinkage_rate=1.00,
    …
```
-
Similar to the [scikit-learn module](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.covariance). I'm primarily interested in implementing the Graphical Lasso, but empirical and sh…
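
A minimal usage sketch of the scikit-learn Graphical Lasso (`sklearn.covariance.GraphicalLassoCV`, which picks the penalty by cross-validation); the data here is a placeholder.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # placeholder data

gl = GraphicalLassoCV().fit(X)
cov = gl.covariance_                 # regularized covariance estimate
prec = gl.precision_                 # sparse precision (inverse covariance)
alpha = gl.alpha_                    # penalty chosen by cross-validation
```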
-
Getting started with a wishlist for outlier-robust multivariate location and scatter estimators:
#3220 size (overall scaling)
- MCD in scikit-learn: not good with high contamination and large k_vars (usage sketch below this list)
-…
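
For comparison, a minimal usage sketch of the scikit-learn MCD mentioned above (`sklearn.covariance.MinCovDet`); the contamination pattern and `support_fraction` value are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X[:10] += 8                               # contaminate 10% of the rows (illustrative)

mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(X)
loc = mcd.location_                       # robust location estimate
scatter = mcd.covariance_                 # robust scatter estimate
d2 = mcd.mahalanobis(X)                   # squared robust Mahalanobis distances
```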
-
For gene ranking (for GSEA, for example) and visualizations (on volcano plots, for example), DESeq author Michael Love suggests shrinking the effect sizes (log-fold changes) with the `lfcShrink` fu…
-
Hi everyone,
I am trying to script the ensemble; however, variable arguments (*args) cannot be used with TorchScript:
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of argument…
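
A minimal sketch of the usual workaround, assuming the ensemble just averages member outputs: give `forward` a fixed signature and iterate an `nn.ModuleList` instead of passing `*args`; the `Ensemble` class here is a stand-in, not the poster's code.

```python
import torch
from torch import nn
from typing import List

class Ensemble(nn.Module):
    """Stand-in ensemble that averages member outputs; avoids *args, which TorchScript rejects."""

    def __init__(self, members: List[nn.Module]):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs: List[torch.Tensor] = []
        for m in self.members:
            outs.append(m(x))
        return torch.stack(outs, dim=0).mean(dim=0)

scripted = torch.jit.script(Ensemble([nn.Linear(4, 1), nn.Linear(4, 1)]))
```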
-
(issue mainly to park an article that might have good general theory in MLE context, not read yet)
related PR #1665: TheilGLS, generalized Ridge for the linear model (small sketch below)
Hansen, Bruce E. 2016. “Efficient Shr…
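
To make the generalized-Ridge connection concrete, a small numpy sketch of Theil/Goldberger-style estimation that shrinks the coefficients toward stochastic restrictions `R @ beta ≈ r`; the function name and the fixed weight `lam` are illustrative, not the statsmodels API.

```python
import numpy as np

def generalized_ridge(y, X, R, r, lam=1.0):
    # minimize ||y - X b||^2 + lam * ||R b - r||^2
    # => b = (X'X + lam R'R)^{-1} (X'y + lam R'r)
    # with R = identity and r = 0 this reduces to ordinary ridge regression
    XtX = X.T @ X
    Xty = X.T @ y
    return np.linalg.solve(XtX + lam * (R.T @ R), Xty + lam * (R.T @ r))
```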
-
(mainly parking a reference for an old idea)
Variance and covariance estimates are noisy and unreliable in small or very small samples.
One idea is to use penalized or shrinkage (co)variance to get bette…
-
Shrinking the endog is another principle that allows reuse of existing methods for robust regression. It is similar to winsorizing and is an alternative to trimming or dropping outliers (e.g. #3273 #9…
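
A minimal sketch of what "shrinking the endog" could look like in practice, assuming a preliminary robust fit is used to pull flagged observations toward their fitted values before rerunning plain OLS; the cutoff `c`, the shrinkage weight, and the helper name are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def ols_with_shrunk_endog(y, X, c=2.0, weight=0.5):
    # preliminary robust fit (Huber norm by default); X is assumed to include a constant
    prelim = sm.RLM(y, X).fit()
    resid = y - prelim.fittedvalues
    flagged = np.abs(resid) > c * prelim.scale     # observations with large robust residuals
    y_shrunk = np.asarray(y, dtype=float).copy()
    # pull flagged endog values part of the way toward the preliminary fitted values
    y_shrunk[flagged] = prelim.fittedvalues[flagged] + weight * resid[flagged]
    return sm.OLS(y_shrunk, X).fit()
```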