DoubleML / doubleml-for-py

DoubleML - Double Machine Learning in Python
https://docs.doubleml.org
BSD 3-Clause "New" or "Revised" License

[Feature Request]: DoubleMPLR weights support #248

Closed wmaucla closed 3 months ago

wmaucla commented 3 months ago

Describe the feature you want to propose or implement

For the IRM model, there is a parameter called `weights`. We would like a similar `weights` parameter for the PLR model. We're looking for something similar to what was found here.

Propose a possible solution or implementation

Allow for user-inputted weights to be used with the PLR model (similar to the IRM model). The default weights can be the same as the IRM models’.

Did you consider alternatives to the proposed solution? If yes, please describe

No response

Comments, context or references

Thanks for the consideration! We love DoubleML :)

SvenKlaassen commented 3 months ago

Thank you very much for the suggestion. I am not quite sure which causal quantities you hope to identify by using weights in the PLR. Just as a reference: The weighted IRM estimates a weighted average treatment effect:

$$ \theta_0 = \mathbb{E}[(g(1,X) - g(0,X))\omega(Y,X,D)]$$

Maybe you can elaborate more on your idea or use case.
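As a minimal illustration of the estimand above, the sample analogue of $\theta_0 = \mathbb{E}[(g(1,X) - g(0,X))\,\omega(Y,X,D)]$ is just a weighted mean of the fitted outcome-regression contrasts. The sketch below uses simulated data and the true conditional means as stand-ins for fitted $g(d, X)$; it is not DoubleML output, and the weights $\omega$ are arbitrary normalized example weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated covariate, binary treatment, and outcome with a constant
# treatment effect of 2.0.
X = rng.normal(size=n)
D = rng.binomial(1, 0.5, size=n)
Y = 2.0 * D + X + rng.normal(size=n)

# Stand-ins for fitted outcome regressions g(d, X); here the true
# conditional means, purely for illustration.
g1 = 2.0 + X   # g(1, X)
g0 = X         # g(0, X)

# Example weights omega(Y, X, D): any nonnegative weights, normalized
# to mean one so the unweighted ATE is recovered when effects are constant.
omega = rng.uniform(0.5, 1.5, size=n)
omega = omega / omega.mean()

# Sample analogue of theta_0 = E[(g(1,X) - g(0,X)) * omega]
theta_hat = np.mean((g1 - g0) * omega)
print(round(theta_hat, 2))  # close to the true effect of 2.0
```

With a constant effect and mean-one weights this reduces to the plain ATE; the weights only matter once the effect $g(1,X) - g(0,X)$ varies with $X$, which is exactly the heterogeneous case a weighted average targets.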

wmaucla commented 3 months ago

Hi Sven, Thank you for the reply. We are using DoubleML for a marketing effect use case, and would like to weight each of our individual data points by a demographics-related correction factor. We have used the weights parameter in the IRM model to do this in the past, but would like the ability to do the same with an updated model that uses PLR.

SvenKlaassen commented 3 months ago

Thank you. Would it still be possible for you to use the CATE functionality instead and define the basis based on your weights?

wmaucla commented 3 months ago

Hi Sven, thanks for your followup. We've decided to switch our methodology. Thanks for the help!