Closed by shaddyab 3 years ago
Thanks for reaching out. As ElasticNetPropensityModel is a child class of LogisticRegressionPropensityModel, it passes in parameters such as 'penalty': 'elasticnet' in model_() to call sklearn's LogisticRegressionCV.
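As a hedged illustration of the mechanism described above, here is how an elastic-net penalty can be passed to sklearn's LogisticRegressionCV directly. The parameter names below are sklearn's own; the toy data and variable names are assumptions for the sketch, not causalml internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # binary treatment flag

# penalty='elasticnet' requires the saga solver and an explicit l1_ratios list
model = LogisticRegressionCV(
    penalty="elasticnet", solver="saga", l1_ratios=[0.5], Cs=5, cv=3, max_iter=1000
)
model.fit(X, w)
p = model.predict_proba(X)[:, 1]  # propensity scores in (0, 1)
```

Note that sklearn rejects penalty='elasticnet' with the default lbfgs solver, which is why saga is specified explicitly.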
@paullo0106 , thank you for taking the time to answer my question. Your answer makes sense. However, when further examining the fit function and the init, I was unable to find how to pass the propensity_model argument to the X-learner. In init it will always default to None (i.e., it is not an input arg), while in the fit function it will be set to the default ElasticNetPropensityModel if p is None.
Right, so the current pattern for these meta-learners is that you can pass in the pre-calculated propensity scores to fit(), fit_predict(), or estimate_ate(); otherwise, by default they will use ElasticNetPropensityModel or other models in compute_propensity_score() to calculate the scores.
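The fallback logic described above (use the user-supplied p if given, otherwise fit a default propensity model internally) can be sketched with a toy class. This is an illustrative mock of the described flow, not causalml's actual implementation; the class name and the choice of LogisticRegression as the fallback are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ToyMetaLearner:
    """Mimics the described dispatch: external p wins, else fit internally."""
    def fit(self, X, treatment, y, p=None):
        if p is None:
            # fallback path: compute propensity scores with an internal model
            # (causalml reportedly defaults to ElasticNetPropensityModel here)
            pm = LogisticRegression().fit(X, treatment)
            p = pm.predict_proba(X)[:, 1]
        self.p_ = np.clip(p, 1e-6, 1 - 1e-6)  # guard against 0/1 scores
        return self

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
treatment = rng.integers(0, 2, size=100)
y = X[:, 0] + treatment + rng.normal(size=100)

# Pattern 1: let the learner estimate p itself
m1 = ToyMetaLearner().fit(X, treatment, y)
# Pattern 2: pass pre-calculated propensity scores
m2 = ToyMetaLearner().fit(X, treatment, y, p=np.full(100, 0.5))
```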
I guess your suggestion here is to have the PropensityModel as a passable argument when initializing meta-learners, to be used by compute_propensity_score() during training, say when a user wants GradientBoostedPropensityModel or another model?
cc: @ppstacy
@paullo0106 Exactly! It would be nice to have the PropensityModel as a passable argument.
Hi @shaddyab. Currently, you should pass propensity scores, p, directly to X-learner's fit() and/or predict() instead of the propensity_model. The rationale is that propensity modeling has a different scope and is better kept separate from uplift modeling.
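The separation advocated above can be sketched as a two-step workflow: fit any propensity model on its own, then hand the resulting scores to the uplift learner. The causalml calls at the end are commented out because they assume causalml is installed; the classifier choice and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
treatment = (X[:, 0] + rng.normal(size=300) > 0).astype(int)
y = X[:, 1] + 0.5 * treatment + rng.normal(size=300)

# Step 1: propensity modeling, scoped separately from uplift modeling
pm = GradientBoostingClassifier(n_estimators=50).fit(X, treatment)
p = np.clip(pm.predict_proba(X)[:, 1], 0.01, 0.99)  # clip extreme scores

# Step 2: pass the scores straight to the meta-learner, per the thread
# from causalml.inference.meta import BaseXRegressor
# learner = BaseXRegressor(learner=some_regressor)
# te = learner.fit_predict(X=X, treatment=treatment, y=y, p=p)
```

Clipping the scores away from 0 and 1 is a common precaution, since X-learner weighting divides by p and 1-p.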
@jeongyoonlee This makes sense. I posted this question as I am trying to figure out how to 1) tune the X-learner, and 2) specify the propensity scores for experimental data (I already posted two separate questions here and here).
In summary:
Hope this helps.
@jeongyoonlee, this definitely helps. Would it also be possible for you to weigh in on the recommended approach for tuning posted here?
As a side note, I spent some time figuring out how to integrate the current code base with sklearn's CV functions (e.g., cross_validate). The main issue is that when p is provided to the fit function via the fit_params argument, there is no easy way to also pass it to the predict function to complete the CV evaluation, and you get an error message. The reason is that the default p for the predict function is None and no propensity_model was fitted during the fit step.

I was able to figure out a few workarounds, but they are not yet ready for a PR. One solution is to encapsulate the three main variables needed for modeling (treatment, y, and p) into a single variable passed to the fit and predict functions (something I noticed in the pylift package). This would enable easier integration with sklearn, which usually expects two inputs to the fit and predict functions (e.g., X and y).
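The encapsulation workaround described above can be sketched as a wrapper whose target Y packs outcome, treatment, and propensity into one array, so a plain fit(X, Y)/predict(X) interface carries all three through cross-validation. This is an illustrative toy, not code from causalml or pylift; the class name, column layout, and the two-model "uplift" fit are all assumptions.

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.linear_model import LinearRegression

class PackedUpliftWrapper(BaseEstimator):
    """Expects Y with columns [outcome, treatment, propensity]."""
    def fit(self, X, Y):
        y, treatment, p = Y[:, 0], Y[:, 1], Y[:, 2]
        # p is unpacked but unused in this toy fit; a real meta-learner
        # would forward it to its internal fit(..., p=p) call
        self.m1_ = LinearRegression().fit(X[treatment == 1], y[treatment == 1])
        self.m0_ = LinearRegression().fit(X[treatment == 0], y[treatment == 0])
        return self

    def predict(self, X):
        # treatment-effect estimate: difference of per-arm outcome models
        return self.m1_.predict(X) - self.m0_.predict(X)

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 3))
treatment = rng.integers(0, 2, size=120).astype(float)
y = X[:, 0] + treatment + rng.normal(size=120)
p = np.full(120, 0.5)
Y = np.column_stack([y, treatment, p])  # one variable for fit and predict

tau_hat = PackedUpliftWrapper().fit(X, Y).predict(X)
```

Because the wrapper only needs X and Y, it slots into sklearn utilities like cross_validate without fit_params plumbing.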
Based on the inference/meta/xlearner.py fit function docstring, when p (the propensity score) is None, ElasticNetPropensityModel() is used to generate the propensity scores. However, it appears to me that the ElasticNetPropensityModel class body in causalml/propensity.py is empty:

pass

Is this a bug, or am I missing something? I don't get an error message when fitting an X-learner.
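One possible explanation, sketched with toy names below: a Python class whose body is only `pass` is not empty in behavior; it inherits everything from its parent. If ElasticNetPropensityModel subclasses LogisticRegressionPropensityModel and the elastic-net parameters are injected elsewhere (as described earlier in this thread), a bare class body would still work without error. The classes here are illustrative stand-ins, not causalml's code.

```python
class Parent:
    def fit(self, X):
        # stand-in for the parent's real fitting logic
        return "fitted by Parent"

class Child(Parent):
    pass  # no body: Child still has Parent's fit() via inheritance

result = Child().fit(None)
print(result)  # → fitted by Parent
```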