[Closed] Pearcekieser closed this pull request 1 year ago
Hi @Pearcekieser,

I really like this. My only suggestion is to change throw_on_fail to the more Pythonic raise_on_fail.

Otherwise, well done! Thanks for including a test, too.
Updated throw_on_fail to raise_on_fail.

Ty, my Java was showing 😅
Thanks!
Problem Statement

cph.check_assumptions can be tricky to use because it does not return anything the caller can use to infer the outcome of the assumption checks (it returns an axes reference).
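For context, a minimal sketch of the status quo this PR addresses (dataset and threshold are illustrative, not from the PR):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# check_assumptions prints advice and hands back plotting axes; there is
# no return value that tells the caller whether any check failed.
axes = cph.check_assumptions(df, p_value_threshold=0.05)
```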
Solution

I added an input parameter throw_on_fail: when True, check_assumptions raises an error if an assumption check fails, so the calling program can take appropriate action (e.g. discard the model results for that input/output pair).
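A sketch of the intended calling pattern, assuming the parameter lands as raise_on_fail per the review above (the exact exception type is an assumption):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

try:
    # raise_on_fail is the parameter added by this PR (renamed from
    # throw_on_fail during review).
    cph.check_assumptions(df, p_value_threshold=0.05, raise_on_fail=True)
except Exception:  # hypothetical: the PR defines which error is raised
    pass  # e.g. discard the model results for this input/output pair
```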
Other solutions considered

- Use proportional_hazard_test() to do a similar check but still return the data in a way where I can detect if the assumption check failed (see the sketch after this list). I figured it'd be cool to add a fix to the library instead.
- Return the test_results table generated by proportional_hazard_test() - this might break some expected graphing behavior if used in notebooks. I think this might be the best option because it gives the library user the most flexibility, but it's a more significant change, so I'll await your input. Adding throw_on_fail still makes sense even if the return is adjusted to return test_results when show_plots==False.
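A minimal sketch of that first alternative, assuming the caller thresholds the p-values from proportional_hazard_test directly (the threshold is illustrative):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# The same statistical test check_assumptions runs internally.
results = proportional_hazard_test(cph, df, time_transform="rank")

# results.summary is a DataFrame with one row per covariate; treat any
# p-value below the threshold as a failed assumption check.
assumptions_failed = (results.summary["p"] < 0.05).any()
```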
Testing

Added a unit test to verify this functionality. Ran the test; output below:
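A sketch of what such a test could look like; the actual test in the PR may differ, and the dataset, exception type, and final parameter name are assumptions:

```python
import pytest
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

def test_check_assumptions_raises_when_assumptions_fail():
    # The Rossi dataset is known to violate the PH assumption for some
    # covariates at a loose threshold, so this call should raise.
    df = load_rossi()
    cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

    with pytest.raises(Exception):
        cph.check_assumptions(df, p_value_threshold=0.05, raise_on_fail=True)
```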
(This is my first pull request to lifelines. I did my best to follow the contributing guide. I appreciate your patience if I missed something.)