-
```
# torchensemble's soft gradient boosting regressor; MLP is assumed
# to be a user-defined torch.nn.Module class
from torchensemble import SoftGradientBoostingRegressor

n_bases = 2
softGBM = SoftGradientBoostingRegressor(
    estimator=MLP,
    n_estimators=n_bases,
    shrinkage_rate=1.00,
…
```
-
# Cleaned up list
## Shrinkage estimators
**Linear Shrinkage**
- [x] [Ledoit-Wolf, 2004](http://www.ledoit.net/honey.pdf), generic LW estimators (see the example after this list)
- [x] [Schafer and Strimmer, 2005](http://strim…
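
For reference, linear (Ledoit-Wolf-type) shrinkage pulls the sample covariance toward a scaled identity, `(1 - delta) * S + delta * mu * I`, with a data-driven intensity `delta`. A minimal sketch using scikit-learn's implementation (random data purely for illustration):

```
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))   # n = 50 samples, p = 20 features

lw = LedoitWolf().fit(X)
print(lw.shrinkage_)            # estimated shrinkage intensity delta
print(lw.covariance_.shape)     # (20, 20) shrunk covariance estimate
```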
-
Schafer and Strimmer (and other papers) also discuss variance shrinkage, and they tend to combine shrinkage of the sample covariance with shrinkage of the variances (e.g. in corpcor). …
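
A rough sketch of that combined scheme, assuming fixed intensities `lam_corr` and `lam_var` purely for illustration (Schafer and Strimmer derive data-driven optimal intensities analytically; corpcor shrinks correlations toward the identity and variances toward their median):

```
import numpy as np

def combined_shrinkage(X, lam_corr=0.2, lam_var=0.2):
    """Shrink variances toward their median and correlations toward
    the identity, then recombine; intensities are placeholders."""
    S = np.cov(X, rowvar=False)
    v = np.diag(S)
    v_shrunk = lam_var * np.median(v) + (1 - lam_var) * v
    d = np.sqrt(v)
    R = S / np.outer(d, d)                # sample correlation matrix
    R_shrunk = (1 - lam_corr) * R         # shrink off-diagonals toward 0
    np.fill_diagonal(R_shrunk, 1.0)       # keep unit diagonal
    d_s = np.sqrt(v_shrunk)
    return R_shrunk * np.outer(d_s, d_s)  # back to covariance scale
```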
-
According to the [extended shrinkage estimator section](http://bioconductor.org/packages/3.9/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#moreshrink) in the DESeq2 vignette, apeglm has better performanc…
-
Similar to the [scikit-learn module](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.covariance). I'm primarily interested in implementing the Graphical Lasso, but empirical and sh…
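
As a point of reference, scikit-learn's `covariance` module exposes both the empirical estimator and the Graphical Lasso; a minimal usage sketch (random data, `alpha` chosen arbitrarily):

```
import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))

emp = empirical_covariance(X)            # plain sample covariance
gl = GraphicalLasso(alpha=0.05).fit(X)   # L1-penalized precision estimate
print(np.count_nonzero(gl.precision_))   # sparsity of estimated precision
```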
-
Hi, what do you mean by the following (found in your README)?
> Ledoit-Wolf Estimator from [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.LedoitWolf.html) use shrinka…
-
### Describe the workflow you want to enable
Currently, scikit-learn's `LinearDiscriminantAnalysis` (LDA) classifier does not support incremental learning through the `partial_fit` method. This poses c…
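
One way such a `partial_fit` could work is by accumulating per-class sufficient statistics across batches; a hypothetical sketch (not scikit-learn API, and it assumes integer class labels `0..n_classes-1`):

```
import numpy as np

class IncrementalLDAStats:
    """Accumulate per-class counts, sums, and outer-product sums;
    class means and the pooled covariance needed by LDA follow."""

    def __init__(self, n_features, n_classes):
        self.n = np.zeros(n_classes)
        self.s = np.zeros((n_classes, n_features))
        self.ss = np.zeros((n_classes, n_features, n_features))

    def partial_fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            self.n[c] += len(Xc)
            self.s[c] += Xc.sum(axis=0)
            self.ss[c] += Xc.T @ Xc
        return self

    def pooled_covariance(self):
        means = self.s / self.n[:, None]
        within = self.ss.sum(axis=0) - sum(
            nc * np.outer(m, m) for nc, m in zip(self.n, means))
        return within / (self.n.sum() - len(self.n))
```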
-
Sparse Laplacian Shrinkage combines an L1-based penalty with a quadratic informative penalty, similar to glmnet but with a structured L2 penalization matrix.
Sparse Laplacian Shrinkage is the first stran…
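
Reading the description as the objective `(1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*b'Lb`, with `L` a graph Laplacian encoding feature structure, a proximal-gradient sketch could look like the following (function and parameter names are assumptions, not an existing API):

```
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_laplacian_shrinkage(X, y, L, lam1=0.1, lam2=0.1, n_iter=500):
    """ISTA-style iteration: gradient step on the smooth part
    (least squares + Laplacian quadratic), then soft-thresholding
    for the L1 penalty. Illustrative sketch only."""
    n, p = X.shape
    b = np.zeros(p)
    # step size from the Lipschitz constant of the smooth part
    lip = np.linalg.norm(X, 2) ** 2 / n + lam2 * np.linalg.norm(L, 2)
    step = 1.0 / lip
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n + lam2 * (L @ b)
        b = soft_threshold(b - step * grad, step * lam1)
    return b
```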
-
Currently, `corrected` is a `true/false` variable that allows using `n-1` or `n` in the denominator. However, it's possible to use other denominators to get a shrinkage estimator, which can sometimes …
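
For instance, for Gaussian data the divisor `n + 1` minimizes the mean squared error of the variance estimate, at the cost of bias. A minimal sketch of a generalized denominator (the `denom` parameter is hypothetical, not the package's API):

```
import numpy as np

def variance(x, denom=None):
    """Sample variance with a configurable denominator.
    denom = n - 1 is unbiased, denom = n is maximum likelihood,
    and denom = n + 1 minimizes MSE under Gaussian data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if denom is None:
        denom = n - 1
    return np.sum((x - x.mean()) ** 2) / denom
```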
-
Hi,
referring to the list in issue #8,
I implemented the classical Tyler's M-estimator (1987) and the shrunken version proposed by Zhang and Wiesel (2016), with both the Ledoit & Wolf-type of sh…
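
For context, Tyler's M-estimator is typically computed by a fixed-point iteration, and regularized variants add shrinkage toward the identity inside that iteration. A generic sketch of the idea (the exact normalization and intensity rule in Zhang & Wiesel (2016) may differ; `rho` is a placeholder intensity):

```
import numpy as np

def tyler_scatter(X, rho=0.0, n_iter=100, tol=1e-8):
    """Fixed-point iteration for Tyler's M-estimator of scatter with
    optional shrinkage toward the identity (rho > 0).
    X: (n, p) array of centered observations."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        w = np.einsum('ij,jk,ik->i', X, inv, X)  # x_i' Sigma^{-1} x_i
        S = (p / n) * (X.T @ (X / w[:, None]))   # weighted scatter update
        S = (1 - rho) * S + rho * np.eye(p)      # shrink toward identity
        S *= p / np.trace(S)                     # fix scale (defined up to scale)
        if np.linalg.norm(S - Sigma) < tol * np.linalg.norm(Sigma):
            Sigma = S
            break
        Sigma = S
    return Sigma
```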