Closed: brash6 closed this PR 7 months ago
Thank you for this PR @brash6! Regarding the warnings that persist in the two CIs that still have test issues, I also see that some operations raise

`divide by zero encountered in divide`

in

`np.sum(t * (1 - p_xm) / (p_xm * (1 - p_x)))`

I don't know if it is a big deal or whether we should address it so as to close issue #48 as well. Do we know yet why we sometimes obtain NaN as outputs in the test `test_total_is_direct_plus_indirect`?

Thank you again for your time and for this work!
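For context, a minimal, self-contained illustration of how this warning can turn into NaN effects (the array values are made up; the expression mirrors the one quoted above):

```python
import numpy as np

# Toy treatment indicator and propensity estimates; in practice these
# come from fitted classifiers. The last two p_xm values have collapsed
# to 0, which forest-based probability estimates can produce.
t = np.array([1.0, 1.0, 0.0])
p_x = np.array([0.5, 0.5, 0.5])
p_xm = np.array([0.8, 0.0, 0.0])

with np.errstate(divide="ignore", invalid="ignore"):
    weights = t * (1 - p_xm) / (p_xm * (1 - p_x))

print(weights)          # [0.5 inf nan]: 1/0 gives inf, 0/0 gives nan
print(np.sum(weights))  # nan, which then propagates into the effects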
- Failing tests due to tolerance thresholds are solved.
- The `'DataFrame' object has no attribute 'iteritems'` error is addressed by adding the line `pd.DataFrame.iteritems = pd.DataFrame.items` in the `r_dependency_required` decorator (see the sketch after this comment). Hence the pandas version is no longer pinned in the setup file. @bthirion

The last failing test is the one below, due to the division by zero that @houssamzenati mentioned earlier:

```
-- test_total_is_direct_plus_indirect[mediation_ipw_forest_cf-dict_param2] --

effects_chap = (nan, nan, 1.2194361538090226, nan, nan)

    def test_total_is_direct_plus_indirect(effects_chap):
        if not np.isnan(effects_chap[1]):
            assert effects_chap[0] == pytest.approx(
                effects_chap[1] + effects_chap[4])
        if not np.isnan(effects_chap[2]):
>           assert effects_chap[0] == pytest.approx(
                effects_chap[2] + effects_chap[3])
E           assert nan == nan ± ???
E
E           comparison failed
E           Obtained: nan
E           Expected: nan ± ???

src/tests/estimation/test_get_estimation.py:131: AssertionError
```

As discussed last Friday, is it OK for you to solve it in this PR? @judithabk6
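For reference, the compatibility shim amounts to something like the following sketch (the `hasattr` guard is an addition here, so the alias stays a no-op on older pandas):

```python
import pandas as pd

# pandas 2.0 removed DataFrame.iteritems() (deprecated since 1.5) in
# favour of DataFrame.items(); re-aliasing the old name keeps code that
# still calls iteritems() (e.g. the R-interop conversion path) working.
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items
```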
Great! Thanks a lot. Judith, if needed, I will have time on Thursday to work on this issue.
Yes, on it! Working on clipping for this.
I have just pushed the clipping. It seems to fix the tests.
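What "clipping" means here, as a minimal sketch (the function name is illustrative; 1e-6 is the threshold mentioned in the next comment):

```python
import numpy as np

def clip_propensities(p, eps=1e-6):
    """Bound estimated propensity scores away from 0 and 1 so that
    inverse-propensity weights stay finite (no 1/0 or 0/0 cases)."""
    return np.clip(p, eps, 1 - eps)
```

Applied to `p_xm` (and `p_x`) before forming the ratio discussed above, this removes the inf/NaN cases at the cost of a small bias.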
I have run some experiments to check the impact. Here are some plots of the estimation without clipping (on the abscissa) against the version with 1e-6 clipping (on the ordinate), for all variations, for the indirect effect:

[plots: indirect-effect estimates, without clipping vs. with 1e-6 clipping]
I think it is OK, wdyt @bthirion? Also, there is no clipping for DML; if I remember correctly, it failed from time to time for the same reason (division by zero) with forests. Shall I add clipping there as well?
I think it is OK. Thx! For DML: I would add clipping for consistency.
You need to catch the error for the CI-without-R.
All checks have passed. Can we merge this PR? @bthirion @judithabk6 @houssamzenati @zbakhm
Hi, I have left a couple of comments. If you don't want to address them here, simply open a new issue. Best,
In this PR:

- `mediation_DML` is renamed to `mediation_dml`
- When `DependencyNotInstalledError` is raised, the test of the corresponding estimator is then skipped (see the sketch after this comment)

TODO:

FYI @bthirion @judithabk6 @houssamzenati
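The skip behaviour can be sketched as follows; `DependencyNotInstalledError` below is a stand-in for the project's exception of the same name, and the decorator name is illustrative:

```python
import functools
import pytest

class DependencyNotInstalledError(Exception):
    """Stand-in for the project's exception of the same name."""

def skip_if_dependency_missing(test_func):
    """Turn a missing optional dependency into a pytest skip
    instead of a test failure."""
    @functools.wraps(test_func)
    def wrapper(*args, **kwargs):
        try:
            return test_func(*args, **kwargs)
        except DependencyNotInstalledError as exc:
            pytest.skip(f"dependency not installed: {exc}")
    return wrapper
```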
For code coverage, I need to add the repository to my Codecov account and set the Codecov token in the GitHub repository secrets via the repository settings. But the problem is that I don't have access to the repository on Codecov, nor to the repository settings on GitHub.
@judithabk6 Can you please create a Codecov account and set the Codecov token in the GitHub repository secrets? This way I can configure the CI using the Codecov token.
@zbakhm done (I think). Let me know if that doesn't work. Thanks :)
Do you know why the CI is failing?
The CI is failing because I am testing the code coverage feature, and it is giving errors.
@judithabk6 I think we have to set up a call together to configure the code coverage feature, because I don't have access to the repository settings on GitHub or to the repository page on Codecov.
Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.
Thanks for integrating Codecov - We've got you covered :open_umbrella:
LGTM!
Apparently the code coverage badge is working with Codecov. Can we merge this PR now? @zbakhm @bthirion @judithabk6 @houssamzenati