-
Hi,
Referring to the list in issue #8,
I implemented Tyler's classical M-estimator (1987) and the shrunken version proposed by Zhang and Wiesel (2016), with both the Ledoit & Wolf-type of sh…
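For reference, the classical (unshrunk) Tyler estimator is usually computed as a fixed-point iteration; here is a minimal NumPy sketch. The function name, tolerance, and normalization choice are mine, not necessarily those of the implementation being discussed:

```python
import numpy as np

def tyler_m_estimator(X, max_iter=100, tol=1e-6):
    """Fixed-point iteration for Tyler's (1987) M-estimator of scatter.

    X: (n, p) array of centered samples. Returns a (p, p) scatter matrix,
    normalized so that trace(Sigma) == p to fix the scale ambiguity.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # quadratic forms d_i = x_i^T Sigma^{-1} x_i
        d = np.einsum("ij,jk,ik->i", X, inv, X)
        # Sigma <- (p/n) * sum_i x_i x_i^T / d_i
        new = (p / n) * (X / d[:, None]).T @ X
        new *= p / np.trace(new)  # renormalize each step
        if np.linalg.norm(new - sigma, ord="fro") < tol:
            sigma = new
            break
        sigma = new
    return sigma
```

The shrinkage variants modify the same iteration by blending `new` with the identity, which is where the Ledoit & Wolf-type coefficient would enter.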
-
@fmerino21 shared with me a few days ago that one of the UOVEstimator doctests was failing when run with Python instead of Sage.
The failing doctest is this one: https://github.com/Crypto-TI…
-
Hi,
Thanks for sharing this great work.
I am trying to apply different loss functions. However, I have not found anything related to M-estimators. Can you help provide some information about M…
-
Hi all,
The `n_estimators` value on the best model (`automl.model`) returned by FLAML does not seem to be set correctly for a CatBoostClassifier.
Example code here:
```
from flaml import AutoML…
-
For the implementation, it looks like we could reuse sandwich covariances.
For RLM: H1 is the analog of the OLS "nonrobust" covariance.
I haven't figured out whether H3 is HC; it has a summation term that looks simi…
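For context, the HC-type estimators build on the OLS sandwich form bread·meat·bread, with bread = (X'X)⁻¹ and meat = X' diag(eᵢ²) X. A minimal HC0 sketch in NumPy (the names are illustrative, not the statsmodels API):

```python
import numpy as np

def hc0_cov(X, y):
    """White's HC0 sandwich covariance for OLS coefficients.

    bread = (X'X)^{-1}, meat = X' diag(e_i^2) X,
    returned matrix = bread @ meat @ bread.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    # scale each row of X by its squared residual before forming the meat
    meat = X.T @ (X * resid[:, None] ** 2)
    return bread @ meat @ bread
```

The RLM versions replace the squared residuals with functions of ψ(eᵢ), which is presumably where the extra summation term in H3 comes from.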
-
Dear Opt people,
I am aware that one M-estimator example is given in the robust non-rigid alignment example, where each energy term is weighted and the weights are calculated in C++. I am wondering whether this…
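In case it helps frame the question: the per-term weights in such examples typically come from an IRLS weight function w(r) = ψ(r)/r. A minimal Huber example (the `delta` threshold is an assumed parameter, not taken from the alignment example):

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """IRLS weights for the Huber M-estimator: w(r) = psi(r)/r,
    i.e. 1 inside the threshold and delta/|r| outside it."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w
```

Each energy term would then be multiplied by its weight and the least-squares solve repeated until the weights stabilize.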
-
**edit** see #4732 for updated information and implementation
(a random find)
Hosmer, Lemeshow, and May in chapter 8 refer to an influence function definition of Cook's Distance and use score resid…
-
**Describe the feature you'd like to have**
At some point, I had implemented the Good-Turing probabilities estimator. I'm not sure if it disappeared from the list of estimators, or if I forgot to p…
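For what it's worth, the unsmoothed Good-Turing adjusted count r* = (r+1)·N_{r+1}/N_r fits in a few lines; this is a generic illustration, not the implementation that may have been lost:

```python
from collections import Counter

def turing_adjusted_counts(counts):
    """Basic (unsmoothed) Good-Turing: r* = (r+1) * N_{r+1} / N_r,
    where N_r is the number of species observed exactly r times.
    Falls back to the raw count when N_{r+1} == 0, the usual caveat
    that motivates the smoothed Simple Good-Turing variant."""
    freq_of_freq = Counter(counts.values())
    adjusted = {}
    for species, r in counts.items():
        n_r, n_r1 = freq_of_freq[r], freq_of_freq.get(r + 1, 0)
        adjusted[species] = (r + 1) * n_r1 / n_r if n_r1 else float(r)
    return adjusted
```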
-
Hello Professor Chopin,
Thanks for the excellent package and your hard work! Your write-up of the package and its ease of modification are incredible.
I ran into a bug when I was trying to employ the…
-
Hello,
When I came across this package, I was SUPER excited because this has everything I was looking for and more!
FIML, 3-step for covariates, and even BCH!
However I'm running into a problem…