joelters / ineqopp


Different machine learning algorithms get exactly the same results #1

Open zhangc927 opened 1 year ago

zhangc927 commented 1 year ago

Dear Professor, thank you very much for your excellent work, which makes the measurement of inequality of opportunity more scientific and reasonable. One problem I found with ineqopp is that different machine learning algorithms get exactly the same results.

Using your first example code:

```r
iop_pi1 <- IOp(Y,
               X,
               est_method = "Plugin",
               ineq = c("Gini", "MLD"),
               plugin_method = "loglin",
               ML = "Ridge",
               sterr = TRUE,
               boots = 500,
               IOp_rel = TRUE,
               fitted_values = TRUE)
```

The result is:

```
         Gini        MLD
IOp 0.181472412 0.065250764
se  0.007217505 0.002311231
```

When "ML" is set to "RF", the result is:

```
         Gini        MLD
IOp 0.181472412 0.065250764
se  0.007631641 0.002396757
```

Only the standard errors differ. Is something wrong?

When I measure with my own data, the results are also exactly the same.

joelters commented 1 year ago

Hi, thanks a lot for your email! It's nice to know someone out there is using the package! I think the problem is that you are setting the estimation method to plug-in (est_method = "Plugin"), i.e. you are using the plug-in estimator and not the debiased one. Furthermore, in both cases you are asking to compute the fitted values with a log-linear regression (plugin_method = "loglin"). With this setting, the "ML" option does nothing, since no machine learning is being used.

  1. If you want to use the plug-in estimator (just the inequality of the predictions, without orthogonal moments or cross-fitting, i.e. without debiasing), set plugin_method = "ML".
  2. If you want to use the debiased estimator, set est_method = "Debiased" and set the ML option to whatever machine learner you want (as in the last example, which computes iop_deb).
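To make this concrete, the two corrected calls might look like the sketch below. It reuses only the arguments already shown in the example above; the object names (`iop_pi_ml`, `iop_deb2`) are illustrative, and you should check the package documentation for the exact defaults:

```r
# Option 1 (sketch): plug-in estimator with ML fitted values.
# Setting plugin_method = "ML" is what makes the ML argument take effect,
# so switching "RF" to "Ridge" here should now change the point estimates.
iop_pi_ml <- IOp(Y,
                 X,
                 est_method = "Plugin",
                 ineq = c("Gini", "MLD"),
                 plugin_method = "ML",
                 ML = "RF",
                 sterr = TRUE,
                 boots = 500)

# Option 2 (sketch): debiased estimator, where ML is always used.
iop_deb2 <- IOp(Y,
                X,
                est_method = "Debiased",
                ineq = c("Gini", "MLD"),
                ML = "RF")
```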

Thanks again for your email. I should probably modify the example to get rid of options that don't do anything, such as ML in iop_pi1; I see how that can be confusing. If you need anything else, let me know! Best, Joël
