Open zhangc927 opened 1 year ago
Hi, thanks a lot for your email! It's nice to know someone out there is using the package! I think the problem is that you are setting the estimation method to plug-in (est_method = "Plugin"), i.e. you are using the plug-in estimator and not the debiased one. Further, it looks like in both cases you are asking for the fitted values to be computed with a log-linear regression (plugin_method = "loglin"). With this setting the "ML" option does nothing, since no machine learning is required.
Thanks again for your email. I should probably modify the example to get rid of options which don't do anything, such as ML in iop_pi1; I see how that can be confusing. If you need anything else, let me know! Best, Joël
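For reference, a minimal sketch of a call in which the ML option would actually matter, assuming the debiased estimator is selected with est_method = "Debiased" and that plugin_method can simply be omitted in that case (both option names are assumptions here, not confirmed against the package documentation; check ?IOp):

library(ineqopp)

# Sketch: with the debiased estimator the fitted values come from the chosen
# learner, so switching ML should change the point estimates as well.
iop_deb_ridge <- IOp(Y,
                     X,
                     est_method = "Debiased",   # assumed option value for the debiased estimator
                     ineq = c("Gini", "MLD"),
                     ML = "Ridge")

iop_deb_rf <- IOp(Y,
                  X,
                  est_method = "Debiased",
                  ineq = c("Gini", "MLD"),
                  ML = "RF")

# Unlike the plug-in/loglin call in the original example, these two calls
# should generally give different IOp estimates, not just different
# standard errors.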
Dear Professor, thank you very much for your excellent work, which makes the measurement of inequality of opportunity more rigorous. One problem I found with ineqopp is that different machine learning algorithms give exactly the same results.
Using your first example code:
iop_pi1 <- IOp(Y,
               X,
               est_method = "Plugin",
               ineq = c("Gini", "MLD"),
               plugin_method = "loglin",
               ML = "Ridge",
               sterr = TRUE,
               boots = 500,
               IOp_rel = TRUE,
               fitted_values = TRUE)
The result is:
    Gini        MLD
IOp 0.181472412 0.065250764
se  0.007217505 0.002311231
When the "ML "is set to "RF", the result is:
IOp 0.181472412 0.065250764
se 0.007631641 0.002396757
Only the standard errors differ. Is there a problem?
When I estimate with my own data, the results are likewise identical across the ML options.