Closed — LamAdr closed this 4 months ago
Thanks! Are the tests' hard-coded numbers verified against R?
FYI, I'm not planning to allow `hypotheses()` to process `slopes` or `comparisons` or `predictions` objects. I regret this design choice in R. Users should just use the `hypothesis` argument instead of a separate function call.
> Are the tests' hard-coded numbers verified against R?
You mean as in, e.g., `hypo_py = hypotheses(mod, joint=[0, 1, 2], hypothesis=[1, 2, 3])`? If so, yes: I generated a CSV using the corresponding R call.
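The comparison pattern described here might look roughly like this. The CSV contents, column names, and values below are illustrative stand-ins, not the actual fixture from this PR; only the compare-against-an-R-export idea comes from the discussion above:

```python
import csv
import io
import math

# Stand-in for a CSV exported from the corresponding R call; the
# column names and values here are hypothetical.
R_CSV = """statistic,p_value
12.345,0.001
"""

def assert_close(py_val, r_val, rel_tol=1e-8):
    """Fail loudly if a Python value drifts from the R reference."""
    assert math.isclose(py_val, r_val, rel_tol=rel_tol), f"{py_val} != {r_val}"

# In the real test these values would come from
# hypotheses(mod, joint=[0, 1, 2], hypothesis=[1, 2, 3]);
# a hard-coded stand-in shows the comparison shape.
py_rows = [{"statistic": 12.345, "p_value": 0.001}]

r_rows = list(csv.DictReader(io.StringIO(R_CSV)))
for py_row, r_row in zip(py_rows, r_rows):
    assert_close(py_row["statistic"], float(r_row["statistic"]))
    assert_close(py_row["p_value"], float(r_row["p_value"]))
print("all values match R reference")
```

A relative tolerance (rather than exact string equality) avoids spurious failures from floating-point round-tripping through the CSV.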
> I'm not planning to allow `hypotheses()` to process slopes or comparisons or predictions objects.
In that case, maybe we should rename the input `obj` to `model` or `mod`?
Let's do exact parallelism in argument names between R and Python, even if it feels a bit weird.
Let me know when I can merge. Looks great!
Alright, I think you can merge.
Great, thanks!
Here is my go at joint hypothesis testing. It seems pymarginaleffects does not support making hypotheses about `comparisons`/`slopes` objects yet, and I didn't change that. I made changes to two other files:

- `print_head`, to match R's output.
- `get_variables_names()` now keeps the variables in the order they appear in the formula. That's useful if users want to specify which variable they're hypothesizing on based on their indices.
- the `get_df` function.

One potentially problematic assumption is that the presence of the intercept is verified with `len(theta_hat) == len(var_names) + 1`.

Thanks
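For reference, a minimal sketch of that intercept check, with `theta_hat` and `var_names` standing in for the fitted coefficients and formula variables as described above (the dummy-encoding caveat in the comment is my own note, not from the PR):

```python
def has_intercept(theta_hat, var_names):
    # Assumption flagged in the PR: an intercept is inferred solely from
    # a count mismatch between coefficients and formula variables. This
    # could misfire for models where one variable expands into several
    # coefficients (e.g. a categorical encoded as multiple dummies).
    return len(theta_hat) == len(var_names) + 1

print(has_intercept([0.5, 1.2, -0.3], ["x1", "x2"]))  # True: one extra coefficient
print(has_intercept([1.2, -0.3], ["x1", "x2"]))       # False: counts match
```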