Closed: akarlinsky closed this issue 2 years ago
`zaminfluence` can easily be used with clustered standard errors: just pass the clustering index to the `se_group` argument of `ComputeModelInfluence`. (See this line in the examples.)
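To make the clustered option concrete, here is a minimal sketch (not zaminfluence's actual implementation; all names are illustrative assumptions) of the cluster-robust "sandwich" standard error that a clustering index like `se_group` selects, written as a differentiable function of per-observation weights:

```python
import torch

torch.manual_seed(0)
n, p, n_groups = 120, 2, 10
X = torch.cat([torch.ones(n, 1), torch.randn(n, p - 1)], dim=1)
y = X @ torch.tensor([1.0, -2.0]) + torch.randn(n)
groups = torch.arange(n) % n_groups        # toy clustering index

def cluster_se(w):
    """Slope SE with a clustered sandwich: meat = sum_g (X_g' r_g)(X_g' r_g)'."""
    Xw = X * w[:, None]
    XtWX = X.T @ Xw
    beta = torch.linalg.solve(XtWX, Xw.T @ y)
    scores = X * (w * (y - X @ beta))[:, None]   # per-observation scores
    meat = torch.zeros(p, p)
    for g in range(n_groups):
        sg = scores[groups == g].sum(dim=0)      # cluster-summed score
        meat = meat + torch.outer(sg, sg)
    bread = torch.linalg.inv(XtWX)
    V = bread @ meat @ bread
    return torch.sqrt(V[1, 1])

w = torch.ones(n)                          # w = 1 reproduces the plain fit
print(float(cluster_se(w)))
```

Because everything above is built from differentiable torch operations, the same weight-gradient machinery described below applies to it unchanged.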
If you're willing to hack, it would be easy in principle to make different choices for how standard errors are computed. As long as the computation is differentiable, `torch` will take care of the derivatives for you. The code you'd change is here.
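As a hedged sketch of what such a change could look like (illustrative names only, not zaminfluence's actual code): write an HC0 robust sandwich in torch, and autograd gives each point's effect on the SE for free. Swapping in HC1/HC2/HC3 would only change the `meat` line.

```python
import torch

torch.manual_seed(0)
n, p = 200, 2
X = torch.cat([torch.ones(n, 1), torch.randn(n, p - 1)], dim=1)
y = X @ torch.tensor([1.0, -2.0]) + torch.randn(n)

def hc0_se(w):
    """HC0 sandwich SE of the slope under data weights w."""
    Xw = X * w[:, None]
    XtWX = X.T @ Xw
    beta = torch.linalg.solve(XtWX, Xw.T @ y)
    resid = y - X @ beta
    meat = (X * (w * resid ** 2)[:, None]).T @ X   # HC0 meat: X' diag(w r^2) X
    bread = torch.linalg.inv(XtWX)
    return torch.sqrt((bread @ meat @ bread)[1, 1])

w = torch.ones(n, requires_grad=True)   # w = 1 recovers the original fit
se = hc0_se(w)
se.backward()                           # torch takes care of the derivatives
print(float(se), w.grad.shape)          # each gradient entry: that point's influence on the SE
```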
The bootstrap would require more work, but is doable in principle. You'd have to define what it means for the bootstrap samples if you had "left out" a datapoint from the original dataset, though. For example, if a datapoint were resampled three times in a particular bootstrap sample but left out of the original dataset, how would that sample change? But the core idea (express your quantity of interest as a differentiable function of data weights) can still apply.
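That core idea can be sketched in a few lines: write the quantity of interest as a function of per-datapoint weights, differentiate, and use the linear approximation to predict what happens when points are left out (weights set to zero). A toy sketch under those assumptions, not zaminfluence itself:

```python
import torch

torch.manual_seed(1)
n = 100
x = torch.randn(n)
y = 2.0 * x + torch.randn(n)

def slope(w):
    """Weighted least-squares slope of y on x (with intercept)."""
    xm = (w * x).sum() / w.sum()
    ym = (w * y).sum() / w.sum()
    return (w * (x - xm) * (y - ym)).sum() / (w * (x - xm) ** 2).sum()

w = torch.ones(n, requires_grad=True)
base = slope(w)
base.backward()
infl = w.grad                      # d(slope)/dw_i: each point's influence score

drop = torch.argsort(infl)[:3]     # dropping these should raise the slope most
predicted = -infl[drop].sum()      # linear approximation of the change
w0 = torch.ones(n)
w0[drop] = 0.0                     # "leave out" the three points
actual = slope(w0) - base.detach() # exact change from refitting
print(float(predicted), float(actual))
```

The prediction and the exact refit typically agree closely when only a few points are dropped, which is what makes the weight-derivative approach cheap compared to refitting for every subset.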
Can ZAM be used with alternative derivations of standard errors: robust (HC1, HC2, HC3, ...), clustered, bootstrap, etc.? If so, how?