PSLmodels / OG-Core

An overlapping generations model framework for evaluating fiscal policies.
https://pslmodels.github.io/OG-Core/

Update GH Actions and CodeCov #922

Closed. rickecon closed this pull request 7 months ago.

rickecon commented 7 months ago

This PR:

cc: @jdebacker @talumbau

codecov-commenter commented 7 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 73.44%. Comparing base (8f25b15) to head (dd1f3f7).

Additional details and impacted files: https://app.codecov.io/gh/PSLmodels/OG-Core/pull/922

```diff
@@           Coverage Diff           @@
##           master     #922   +/-   ##
=======================================
  Coverage   73.44%   73.44%
=======================================
  Files          19       19
  Lines        4610     4610
=======================================
  Hits         3386     3386
  Misses       1224     1224
```

| Flag | Coverage Δ | |
|---|---|---|
| unittests | `73.44% <100.00%> (ø)` | |

Flags with carried forward coverage won't be shown. See the [carryforward flags documentation](https://docs.codecov.io/docs/carryforward-flags#carryforward-flags-in-the-pull-request-comment) to find out more.

| Files | Coverage Δ | |
|---|---|---|
| ogcore/SS.py | `72.41% <ø> (ø)` | |
| ogcore/TPI.py | `35.40% <ø> (ø)` | |
| ogcore/\_\_init\_\_.py | `100.00% <100.00%> (ø)` | |
| ogcore/demographics.py | `71.56% <ø> (ø)` | |

rickecon commented 7 months ago

@jdebacker. This looks like a weird Dask issue. The failure in Linux Python 3.10 in test_SS.py is shown below. I am going to rerun all the failed tests and see whether they pass, because Linux Python 3.9 passed.

FAILED tests/test_SS.py::test_SS_solver[Baseline, budget balance] - concurrent.futures._base.CancelledError: root-726ee2e9-1618-400e-8c1a-a41ae77898ef
rickecon commented 7 months ago

@jdebacker @talumbau . I reran all the failed jobs, and Linux Python 3.9 passed with no dask concurrent futures error. On this second round, it was Mac Python 3.11 that failed. So I have rerun the jobs again to see if they will pass. My guess is that they will. But this is at least as annoying as the Codecov issue that this PR fixes.

rickecon commented 7 months ago

@jdebacker and @talumbau. Yep. It just took two rounds of rerunning the CI tests to get past those concurrent futures errors. I think this PR is ready for review.

I do still think that we have too many tests for OG-Core. I have a sneaking suspicion that some of these errors that go away when jobs are rerun come from implicit compute limits on GH Actions. It seems excessive for us to test Python 3.9, 3.10, and 3.11; I would prefer we test only Python 3.10 and 3.11.
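
Dropping 3.9 would just be a change to the version matrix in our GH Actions workflow. A sketch of what I have in mind (the exact keys and file layout in our workflow may differ, so this is illustrative only):

```yaml
# Relevant part of the test job only; runs-on, steps, and the rest of
# the workflow are omitted and assumed unchanged.
strategy:
  matrix:
    os: [ubuntu-latest, macos-latest]
    python-version: ["3.10", "3.11"]
```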

jdebacker commented 7 months ago

@rickecon Yes, perhaps we should drop at least the 3.9 tests. I think I'm ok with that.

But I do also think we should minimize running tests when they are not needed. Here's what I posted over at OG-USA that you might add to this PR:

Re my comments a few weeks ago about unnecessary compute time: from a read of the updated workflow docs, I think we should be able to condition the tests to run only when relevant files are affected.

E.g., something like:

on:
  push:
    paths:
      - 'ogusa/**.py'
      - 'tests/**.py'
  pull_request:
    paths:
      - 'ogusa/**.py'
      - 'tests/**.py'
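
For OG-Core itself, the same filter would presumably just point at the ogcore package instead; a sketch, not yet tested against this repo's workflow files:

```yaml
# Illustrative trigger block for an OG-Core workflow; any other
# existing triggers or filters are omitted here.
on:
  push:
    paths:
      - 'ogcore/**.py'
      - 'tests/**.py'
  pull_request:
    paths:
      - 'ogcore/**.py'
      - 'tests/**.py'
```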
rickecon commented 7 months ago

@jdebacker. I have removed the draft status of this PR and have made all the commits that I think it needs. It is ready for your review and can be merged as soon as all the tests pass. I also have the full battery of tests running locally on my machine.

rickecon commented 7 months ago

@jdebacker. This PR had a concurrent futures failure in test_SS.py. I reran the failed tests and got Linux Python 3.9, 3.10, and 3.11 to pass. I am rerunning those tests now to see if I can get more of them to pass. I suspect this is an issue that @talumbau's PRs may fix.

rickecon commented 7 months ago

@jdebacker. All tests are now passing on GitHub. However, 20 tests failed in the full run on my local machine. Here are the results of that full set of local tests. Fifteen of the failures are concurrent.futures._base.CancelledError, which may be a temporary Dask issue, although I haven't seen these errors at this scale before. Of the remaining five failures, three are the test_txfunc.py tests that differ across machines (bad tests or bad code), and the other two are in test_demographics.py and may reflect tests that are not up to date with the updates we have made to the demographics.py module.

I am going to re-run these tests locally overnight tonight and see if the concurrent.futures errors go away.

(ogcore-dev) richardevans@Richards-MacBook-Pro-2 OG-Core % pytest
=================================================== test session starts ====================================================
platform darwin -- Python 3.11.8, pytest-8.1.1, pluggy-1.4.0
rootdir: /Users/richardevans/Docs/Economics/OSE/OG-Core
configfile: pytest.ini
testpaths: ./tests
plugins: cov-5.0.0, anyio-4.3.0, xdist-3.5.0
collected 536 items                                                                                                        

tests/test_SS.py .............................F.....                                                                 [  6%]
tests/test_TPI.py ............F.....FFFF.FFFF                                                                        [ 11%]
tests/test_aggregates.py .....................................                                                       [ 18%]
tests/test_basic.py FF..                                                                                             [ 19%]
tests/test_demographics.py .....F.F........                                                                          [ 22%]
tests/test_elliptical_u_est.py .......                                                                               [ 23%]
tests/test_execute.py F                                                                                              [ 23%]
tests/test_firm.py .....................................................................                             [ 36%]
tests/test_fiscal.py ...................                                                                             [ 40%]
tests/test_household.py ..............................................                                               [ 48%]
tests/test_output_plots.py ..............................................                                            [ 57%]
tests/test_output_tables.py ..............                                                                           [ 59%]
tests/test_parameter_plots.py ........................................                                               [ 67%]
tests/test_parameter_tables.py .......                                                                               [ 68%]
tests/test_parameters.py ..............                                                                              [ 71%]
tests/test_run_example.py ..                                                                                         [ 71%]
tests/test_run_ogcore.py F                                                                                           [ 71%]
tests/test_tax.py ......................................                                                             [ 78%]
tests/test_txfunc.py .....F.......F..........F.                                                                      [ 83%]
tests/test_user_inputs.py F........                                                                                  [ 85%]
tests/test_utils.py ..............................................................................                   [100%]
================================================= short test summary info ==================================================
FAILED tests/test_SS.py::test_run_SS[Reform, use zeta] - concurrent.futures._base.CancelledError: root-055b8346-ccbd-47e5-bfe3-52a3eeae7987
FAILED tests/test_TPI.py::test_run_TPI_full_run[Baseline, small open some periods] - concurrent.futures._base.CancelledError: root-4b9cc401-8c57-48a6-8ee6-d7e018084aa1
FAILED tests/test_TPI.py::test_run_TPI[Baseline] - concurrent.futures._base.CancelledError: root-320f378b-81e4-4ce3-9ca9-0dda61e4dd66
FAILED tests/test_TPI.py::test_run_TPI[Reform] - concurrent.futures._base.CancelledError: root-eea042f6-0281-411d-b0b2-cac372b0077e
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, balanced budget] - concurrent.futures._base.CancelledError: root-e5df3b8e-0b98-49c2-ac0e-db5bdfe45e99
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, small open] - concurrent.futures._base.CancelledError: root-6edb82fb-402b-41bd-9e2a-42b091bb8bb3
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, delta_tau = 0] - concurrent.futures._base.CancelledError: root-2dd87d34-03b2-4ee0-af2b-84f94b4f752a
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline] - concurrent.futures._base.CancelledError: root-87f6771f-58ab-4a9d-97ce-db4368458d5b
FAILED tests/test_TPI.py::test_run_TPI_extra[Reform, baseline spending] - concurrent.futures._base.CancelledError: root-f29bfc5f-2d2d-4969-b6ae-6857e557b5ea
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, Kg>0] - concurrent.futures._base.CancelledError: root-4f4c1d44-ca78-44b9-934e-965647a9c682
FAILED tests/test_basic.py::test_run_small[SS] - concurrent.futures._base.CancelledError: root-bbd4ff45-00bd-445f-922d-8066481e63b3
FAILED tests/test_basic.py::test_run_small[TPI] - concurrent.futures._base.CancelledError: root-218a4746-4b97-409a-8f6f-75c409a4d9ab
FAILED tests/test_demographics.py::test_get_fert - TypeError: unsupported operand type(s) for -: 'list' and 'int'
FAILED tests/test_demographics.py::test_infant_mort - assert array([0.00477758]) == 0.00491958
FAILED tests/test_execute.py::test_runner_baseline_reform - concurrent.futures._base.CancelledError: root-e821f6ba-3211-45d0-ae77-5e993d502e4d
FAILED tests/test_run_ogcore.py::test_run_micro_macro - concurrent.futures._base.CancelledError: root-f59b0866-3ecf-4b89-8442-778aa623dfb6
FAILED tests/test_txfunc.py::test_txfunc_est[DEP] - assert False
FAILED tests/test_txfunc.py::test_tax_func_loop - assert False
FAILED tests/test_txfunc.py::test_tax_func_estimate - assert False
FAILED tests/test_user_inputs.py::test_frisch[Frisch 0.32] - concurrent.futures._base.CancelledError: root-6073dc31-f57a-4ebe-93a9-f4c02a53fc56
=============================== 20 failed, 516 passed, 15179 warnings in 26276.40s (7:17:56) ===============================

Here is the full traceback of the first test_SS.py concurrent.futures error.

______________________________________________ test_run_SS[Reform, use zeta] _______________________________________________

tmpdir = local('/private/var/folders/d4/trj3dssd6s3g8kxvjmczz11w0000gn/T/pytest-of-richardevans/pytest-0/test_run_SS_Reform__use_zeta_0')
baseline = False, param_updates = {'initial_guess_TR_SS': 0.06, 'initial_guess_r_SS': 0.06, 'use_zeta': True}
filename = 'run_SS_reform_use_zeta.pkl'
dask_client = <Client: 'tcp://127.0.0.1:60025' processes=7 threads=14, memory=64.00 GiB>

    @pytest.mark.parametrize(
        "baseline,param_updates,filename",
        [
            (True, param_updates1, filename1),
            (False, param_updates9, filename9),
            (True, param_updates2, filename2),
            (False, param_updates10, filename10),
            (True, param_updates3, filename3),
            # (True, param_updates4, filename4),
            (False, param_updates5, filename5),
            (False, param_updates6, filename6),
            (False, param_updates7, filename7),
            # (False, param_updates8, filename8),
            (False, param_updates11, filename11),
            (True, param_updates12, filename12),
            (True, param_updates13, filename13),
            (True, param_updates14, filename14),
        ],
        ids=[
            "Baseline",
            "Reform, baseline spending",
            "Baseline, use zeta",
            "Reform, baseline spending, use zeta",
            "Baseline, small open",
            # "Baseline, small open use zeta",
            "Reform",
            "Reform, use zeta",
            "Reform, small open",
            # "Reform, small open use zeta",
            "Reform, delta_tau=0",
            "Baseline, non-zero Kg",
            "Baseline, M=3, non-zero Kg",
            "Baseline, M=3, zero Kg",
        ],
    )
    @pytest.mark.local
    def test_run_SS(tmpdir, baseline, param_updates, filename, dask_client):
        # Test SS.run_SS function.  Provide inputs to function and
        # ensure that output returned matches what it has been before.
        SS.ENFORCE_SOLUTION_CHECKS = True
        # if running reform, then need to solve baseline first to get values
        baseline_dir = os.path.join(tmpdir, "OUTPUT_BASELINE")
        if baseline is False:
            p_base = Specifications(
                output_base=baseline_dir,
                baseline_dir=baseline_dir,
                baseline=True,
                num_workers=NUM_WORKERS,
            )
            param_updates_base = param_updates.copy()
            param_updates_base["baseline_spending"] = False
            p_base.update_specifications(param_updates_base)
>           base_ss_outputs = SS.run_SS(p_base, client=dask_client)

tests/test_SS.py:1149: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
ogcore/SS.py:1217: in run_SS
    sol = opt.root(
/opt/anaconda3/envs/ogcore-dev/lib/python3.11/site-packages/scipy/optimize/_root.py:236: in root
    sol = _root_hybr(fun, x0, args=args, jac=jac, **options)
/opt/anaconda3/envs/ogcore-dev/lib/python3.11/site-packages/scipy/optimize/_minpack_py.py:239: in _root_hybr
    retval = _minpack._hybrd(func, x0, args, 1, xtol, maxfev,
ogcore/SS.py:1071: in SS_fsolve
    ) = inner_loop(outer_loop_vars, p, client)
ogcore/SS.py:253: in inner_loop
    results = client.gather(futures)
/opt/anaconda3/envs/ogcore-dev/lib/python3.11/site-packages/distributed/client.py:2372: in gather
    return self.sync(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Client: 'tcp://127.0.0.1:60025' processes=7 threads=14, memory=64.00 GiB>
futures = [<Future: cancelled, key: root-bdde72ee-71c4-432c-8bc2-db10f79de0d6>, <Future: cancelled, key: root-29617a7c-42c1-407a...: root-114ddf63-307d-4e5c-bad7-c1ec9288d7b6>, <Future: cancelled, key: root-055b8346-ccbd-47e5-bfe3-52a3eeae7987>, ...]
errors = 'raise', direct = False, local_worker = None

    async def _gather(self, futures, errors="raise", direct=None, local_worker=None):
        unpacked, future_set = unpack_remotedata(futures, byte_keys=True)
        mismatched_futures = [f for f in future_set if f.client is not self]
        if mismatched_futures:
            raise ValueError(
                "Cannot gather Futures created by another client. "
                f"These are the {len(mismatched_futures)} (out of {len(futures)}) "
                f"mismatched Futures and their client IDs (this client is {self.id}): "
                f"{ {f: f.client.id for f in mismatched_futures} }"  # noqa: E201, E202
            )
        keys = [future.key for future in future_set]
        bad_data = dict()
        data = {}

        if direct is None:
            direct = self.direct_to_workers
        if direct is None:
            try:
                w = get_worker()
            except Exception:
                direct = False
            else:
                if w.scheduler.address == self.scheduler.address:
                    direct = True

        async def wait(k):
            """Want to stop the All(...) early if we find an error"""
            try:
                st = self.futures[k]
            except KeyError:
                raise AllExit()
            else:
                await st.wait()
            if st.status != "finished" and errors == "raise":
                raise AllExit()

        while True:
            logger.debug("Waiting on futures to clear before gather")

            with suppress(AllExit):
                await distributed.utils.All(
                    [wait(key) for key in keys if key in self.futures],
                    quiet_exceptions=AllExit,
                )

            failed = ("error", "cancelled")

            exceptions = set()
            bad_keys = set()
            for key in keys:
                if key not in self.futures or self.futures[key].status in failed:
                    exceptions.add(key)
                    if errors == "raise":
                        try:
                            st = self.futures[key]
                            exception = st.exception
                            traceback = st.traceback
                        except (KeyError, AttributeError):
                            exc = CancelledError(key)
                        else:
                            raise exception.with_traceback(traceback)
>                       raise exc
E                       concurrent.futures._base.CancelledError: root-055b8346-ccbd-47e5-bfe3-52a3eeae7987

/opt/anaconda3/envs/ogcore-dev/lib/python3.11/site-packages/distributed/client.py:2233: CancelledError
--------------------------------------------------- Captured stdout call ---------------------------------------------------
SS using initial guess factors for r and TR of 1.0 and 1.0 , respectively.
K_d has negative elements. Setting them positive to prevent NAN.
GE loop errors =  [0.20734268432167868, 0.22057035696032112, -0.5360365211086738, 0.0, -0.3575103674688288, -0.12207799142791351, -0.032175933072194594, 0.14829677948467448]
K_d has negative elements. Setting them positive to prevent NAN.
GE loop errors =  [0.21569661089442493, 0.2290374224340711, -0.544966724943811, 0.0, -0.36065530635631726, -0.11688730087221878, -0.03245897757206856, 0.08084575105455971]
GE loop errors =  [0.23385087709279367, 0.24742482274259076, -0.5632247875604337, 0.0, -0.37025999693769995, -0.08643019177885956, -0.033323399724392994, -0.00416660695606727]
GE loop errors =  [0.0221432726852026, 0.031221125432389932, -0.1514402833544044, 0.0, -0.1726761263243315, -0.046255954255976436, -0.01554085136918984, -0.013964954079064562]
GE loop errors =  [0.01273422190568363, 0.021455641434833188, -0.11011422402427318, 0.0, -0.1542471666173627, -0.025763877987646067, -0.013882244995562648, -0.0152482416607342]
GE loop errors =  [0.011832875776148818, 0.020518878497225793, -0.10589977261429695, 0.0, -0.15999107854532535, -0.025882837206731513, -0.014399197069079286, -0.01375919618519747]
GE loop errors =  [0.008329763783064131, 0.016875889565026767, -0.08905375346534372, -1.4901161193847656e-08, -0.1170572735242178, -0.01866973523028917, -0.010535154617179604, -0.02161039678720339]
GE loop errors =  [0.005885628469396775, 0.014332010267271114, -0.07683896082618746, 0.0, -0.1037075822978647, -0.016989515146513062, -0.009333682406807826, -0.023512563700944858]
GE loop errors =  [-0.006319590057689027, 0.0016002948698331382, -0.009333745664947823, 0.0, -0.06085763198526917, -0.0009155104307419226, -0.005477186878674226, -0.02772855706821216]
GE loop errors =  [-0.00631958954930556, 0.0016002954011912623, -0.009333748729581481, 0.0, -0.06085764309485053, -0.0009155088377626741, -0.0054771878785365435, -0.027728557149710592]
GE loop errors =  [-0.006319589687900619, 0.0016002952563328732, -0.00933374789410335, 0.0, -0.0608576318576074, -0.0009155091601193016, -0.005477186867184666, -0.027728559277713616]
GE loop errors =  [-0.03142622438823992, -0.024763326256031726, 0.17771024374530886, 5.4724114129101054e-11, 0.046196789716639985, -0.07134719057723393, 0.004157711074497597, 0.08968507875183984]
GE loop errors =  [-0.03184657181841355, -0.025205288302260064, 0.1815865288957983, -1.9984014443252818e-15, 0.04755722236913784, -0.07177340369955185, 0.004280150013222399, 0.09127306588171728]
GE loop errors =  [-0.03186320952813465, -0.025220986803747347, 0.1817106894193452, -2.220446049250313e-15, 0.047619340255971365, -0.07178701969524026, 0.004285740623037423, 0.09133192267977784]
GE loop errors =  [-0.031426225801070656, -0.024763326803664526, 0.17771024854660356, 5.4724114129101054e-11, 0.04619679115177988, -0.07134718941551926, 0.004157711203660194, 0.08968507874031562]
GE loop errors =  [-0.03142622438823993, -0.02476332715007319, 0.17771024374530953, 5.4724114129101054e-11, 0.04619678971664021, -0.07134719057723396, 0.004157711074497618, 0.08968507875183981]
GE loop errors =  [-0.031426224837715386, -0.024763326730521326, 0.1777102301072555, 5.4724114129101054e-11, 0.046196792201324666, -0.07134718956124014, 0.004157711298119218, 0.08968507850192653]
GE loop errors =  [-0.0314262229041545, -0.024763324689354202, 0.17771026146531832, -1.4846436968696253e-08, 0.046196799053766724, -0.07134719082837102, 0.004157711914839005, 0.08968507616588195]
GE loop errors =  [-0.03142622438823993, -0.02476332625603174, 0.17771024374530953, 5.4724114129101054e-11, 0.04619678971664021, -0.07134719057723396, 0.004157711074497618, 0.08968507875183981]
GE loop errors =  [-0.031426224706567224, -0.02476332659207453, 0.17771024669151725, 5.4724114129101054e-11, 0.04619678951195494, -0.07134719332235642, 0.004157711056075944, 0.08968507895496458]
GE loop errors =  [-0.03142622427372374, -0.02476332613514251, 0.1777102426854298, 5.4724114129101054e-11, 0.04619678044320297, -0.0713471904578846, 0.004157710239888271, 0.08968507879114107]
GE loop errors =  [-0.03142622429590996, -0.02476332615856343, 0.17771024289076953, 5.4724114129101054e-11, 0.04619678972136387, -0.07134719081860308, 0.00415771107492275, 0.08968507822613686]
GE loop errors =  [-0.03419304198672503, -0.029713671631066568, 0.03776954029021051, 0.06079897472446716, 0.0025700827348403843, -0.06251710527220827, 0.00023130744613563542, 0.09139507988130272]
GE loop errors =  [-0.022788443127269097, -0.019920110786017166, 0.026842751647589314, -0.0002900501039420078, 0.038364549614115506, -0.05008669556634865, 0.0034528094652703997, 0.06195436806343571]
GE loop errors =  [-0.010718034808295009, -0.009718962277330934, 0.020619640945690243, 0.005843082039132308, 0.00942417624823555, -0.014753040791136318, 0.0008481758623411981, 0.02974813101637773]
GE loop errors =  [0.00027626463181399524, 0.00025679392371036336, 0.005021272019363643, -7.480949193450215e-11, -0.0011502146987367734, 0.0021770387675895397, -0.00010351932288630433, 0.0007662059252306896]
--------------------------------------------------- Captured stderr call ---------------------------------------------------
2024-04-12 15:49:15,307 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:15,889 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:16,267 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:16,717 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:18,417 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:19,231 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:20,850 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:21,458 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:23,272 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:23,671 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:23,690 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:25,383 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:27,211 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:27,772 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:28,159 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:28,651 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:28,950 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:29,905 - distributed.utils_perf - WARNING - full garbage collections took 12% CPU time recently (threshold: 10%)
2024-04-12 15:49:30,465 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:32,947 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:33,733 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:33,740 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:34,638 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:37,625 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:38,033 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:38,945 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:39,825 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:40,809 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:41,231 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:44,180 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:44,718 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:44,725 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:45,132 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:45,141 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:49,156 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:49,403 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:50,639 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:51,027 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:51,041 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:55,078 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:55,532 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:55,545 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:49:55,551 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:56,949 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:49:58,357 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:50:00,424 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:00,976 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:01,794 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:01,809 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:04,565 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:05,161 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:05,163 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:50:05,540 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:06,026 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:06,044 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:07,361 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:08,226 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2024-04-12 15:50:08,526 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:10,071 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:12,457 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:12,461 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2024-04-12 15:50:12,838 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
---------------------------------------------------- Captured log call -----------------------------------------------------
WARNING  distributed.scheduler:scheduler.py:6547 Key lost during replication: Specifications-91a4313929ddab3dde3be6af2808c1fa
WARNING  distributed.scheduler:scheduler.py:6547 Key lost during replication: Specifications-91a4313929ddab3dde3be6af2808c1fa
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-bdde72ee-71c4-432c-8bc2-db10f79de0d6
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-29617a7c-42c1-407a-bd4a-cc450d95a8f6
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-e7794609-a13b-4a3d-a4d0-efe770737f7f
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-0ea6461b-ed8a-47ef-a1c5-0ea73f6e210f
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-114ddf63-307d-4e5c-bad7-c1ec9288d7b6
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-055b8346-ccbd-47e5-bfe3-52a3eeae7987
INFO     distributed.scheduler:scheduler.py:4475 User asked for computation on lost data, root-618dfd02-4feb-47c1-9ab0-37787c9258cb
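
Given the "Key lost during replication" and "computation on lost data" messages in the captured log above, one thing we could try locally is giving each test module its own Dask client rather than sharing one client across the whole session. A minimal sketch of such a fixture (our actual conftest.py may be set up differently, so this is an idea, not our current code):

```python
# conftest.py sketch (hypothetical fixture; scope and worker counts are guesses)
import pytest
from distributed import Client, LocalCluster


@pytest.fixture(scope="module")
def dask_client():
    # A module-scoped cluster, so a worker lost in one test module
    # cannot cancel futures submitted by tests in later modules.
    cluster = LocalCluster(n_workers=2, threads_per_worker=2)
    client = Client(cluster)
    yield client
    client.close()
    cluster.close()
```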

And here is the traceback of the two demographics errors.

______________________________________________________ test_get_fert _______________________________________________________

    @pytest.mark.local
    def test_get_fert():
        """
        Test of function to get fertility rates from data
        """
        S = 100
>       fert_rates, fig = demographics.get_fert(S, 0, 99, graph=True)

tests/test_demographics.py:239: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
ogcore/demographics.py:175: in get_fert
    fig = pp.plot_fert_rates(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

fert_rates_list = array([[0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00,
        0.00000e+00, 0.00000e+00, 0.00000e+00...e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00,
        0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00]])
labels = 2024, start_year = [2024, 2024], years_to_plot = [2021], include_title = False
source = 'United Nations, World Population Prospects', path = None

    def plot_fert_rates(
        fert_rates_list,
        labels=[""],
        start_year=DEFAULT_START_YEAR,
        years_to_plot=[DEFAULT_START_YEAR],
        include_title=False,
        source="United Nations, World Population Prospects",
        path=None,
    ):
        """
        Plot fertility rates from the data

        Args:
            fert_rates_list (list): list of Numpy arrays of fertility rates
                for each model period and age
            labels (list): list of labels for the legend
            start_year (int): first year of data
            years_to_plot (list): list of years to plot
            include_title (bool): whether to include a title in the plot
            source (str): data source for fertility rates
            path (str): path to save figure to, if None then figure
                is returned

        Returns:
            fig (Matplotlib plot object): plot of fertility rates

        """
        # create line styles to cycle through
        plt.rc("axes", prop_cycle=(cycler("linestyle", [":", "-.", "-", "--"])))
        fig, ax = plt.subplots()
        for y in years_to_plot:
>           i = start_year - y
E           TypeError: unsupported operand type(s) for -: 'list' and 'int'

ogcore/parameter_plots.py:395: TypeError
_____________________________________________________ test_infant_mort _____________________________________________________

    @pytest.mark.local
    def test_infant_mort():
        """
        Test of function to get mortality rates from data
        """
        mort_rates, infmort_rate = demographics.get_mort(100, 0, 99, graph=False)
        # check that infant mortality equals rate hardcoded into
        # demographics.py
>       assert infmort_rate == 0.00491958
E       assert array([0.00477758]) == 0.00491958

tests/test_demographics.py:263: AssertionError
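
The test_get_fert failure looks mechanical: in the captured call above, plot_fert_rates receives start_year = [2024, 2024] (a list), so start_year - y fails. A standalone illustration of the same TypeError (not OG-Core code):

```python
# Reproduces the operand-type error raised at parameter_plots.py:395
start_year = [2024, 2024]  # a list, as shown in the captured locals above
y = 2021
i = start_year - y  # TypeError: unsupported operand type(s) for -: 'list' and 'int'
```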

And here is the traceback of the three txfunc errors.

___________________________________________________ test_txfunc_est[DEP] ___________________________________________________

rate_type = 'etr', tax_func_type = 'DEP', numparams = 12
expected_tuple = (array([2.27262567e-23, 6.52115408e-05, 2.58988624e-13, 5.79701075e-09,
       3.39354147e-01, 7.60132613e-01, 9.14365...1, 9.02760087e-06,
       9.02760087e-06, 3.39345119e-03, 7.60123585e-03, 9.02760087e-06]), 237677.14110076256, 152900)
tmpdir = local('/private/var/folders/d4/trj3dssd6s3g8kxvjmczz11w0000gn/T/pytest-of-richardevans/pytest-0/test_txfunc_est_DEP_0')

    @pytest.mark.local  # only marking as local because platform
    # affects results from scipy.opt that is called in this test - so it'll
    # pass if run on Mac with MKL, but not necessarily on other platforms
    @pytest.mark.parametrize(
        "rate_type,tax_func_type,numparams,expected_tuple",
        [
            ("etr", "DEP", 12, expected_tuple_DEP),
            ("etr", "DEP_totalinc", 6, expected_tuple_DEP_totalinc),
            ("etr", "GS", 3, expected_tuple_GS),
        ],
        ids=["DEP", "DEP_totalinc", "GS"],
    )
    def test_txfunc_est(
        rate_type, tax_func_type, numparams, expected_tuple, tmpdir
    ):
        """
        Test txfunc.txfunc_est() function.  The test is that given
        inputs from previous run, the outputs are unchanged.
        """
        micro_data = utils.safe_read_pickle(
            os.path.join(CUR_PATH, "test_io_data", "micro_data_dict_for_tests.pkl")
        )
        s = 80
        t = 2030
        df = txfunc.tax_data_sample(micro_data[str(t)])
        output_dir = tmpdir
        # Put old df variables into new df var names
        df.rename(
            columns={
                "MTR labor income": "mtr_labinc",
                "MTR capital income": "mtr_capinc",
                "Total labor income": "total_labinc",
                "Total capital income": "total_capinc",
                "ETR": "etr",
                "expanded_income": "market_income",
                "Weights": "weight",
            },
            inplace=True,
        )
        test_tuple = txfunc.txfunc_est(
            df, s, t, rate_type, tax_func_type, numparams, output_dir, True
        )

        for i, v in enumerate(expected_tuple):
>           assert np.allclose(test_tuple[i], v)
E           assert False
E            +  where False = <function allclose at 0x10e8feab0>(array([2.27262567e-23, 6.52120583e-05, 2.58990603e-13, 5.79381098e-09,\n       3.37733247e-01, 8.00000000e-01, 9.14366590e-01, 9.02760087e-06,\n       9.02760087e-06, 3.37724219e-03, 7.99990972e-03, 9.02760087e-06]), array([2.27262567e-23, 6.52115408e-05, 2.58988624e-13, 5.79701075e-09,\n       3.39354147e-01, 7.60132613e-01, 9.14365331e-01, 9.02760087e-06,\n       9.02760087e-06, 3.39345119e-03, 7.60123585e-03, 9.02760087e-06]))
E            +    where <function allclose at 0x10e8feab0> = np.allclose

tests/test_txfunc.py:253: AssertionError
____________________________________________________ test_tax_func_loop ____________________________________________________

    @pytest.mark.local
    # mark as local run since results work on Mac, but differ on other
    # platforms
    def test_tax_func_loop():
        """
        Test txfunc.tax_func_loop() function. The test is that given inputs from
        previous run, the outputs are unchanged.
        """
        input_tuple = decompress_pickle(
            os.path.join(
                CUR_PATH, "test_io_data", "tax_func_loop_inputs_large.pbz2"
            )
        )
        (
            t,
            micro_data,
            beg_yr,
            s_min,
            s_max,
            age_specific,
            analytical_mtrs,
            desc_data,
            graph_data,
            graph_est,
            output_dir,
            numparams,
            tpers,
        ) = input_tuple
        tax_func_type = "DEP"
        # Rename and create vars to suit new micro_data var names
        micro_data["total_labinc"] = (
            micro_data["Wage income"] + micro_data["SE income"]
        )
        micro_data["etr"] = (
            micro_data["Total tax liability"] / micro_data["Adjusted total income"]
        )
        micro_data["total_capinc"] = (
            micro_data["Adjusted total income"] - micro_data["total_labinc"]
        )
        # use weighted avg for MTR labor - abs value because
        # SE income may be negative
        micro_data["mtr_labinc"] = micro_data["MTR wage income"] * (
            micro_data["Wage income"]
            / (micro_data["Wage income"].abs() + micro_data["SE income"].abs())
        ) + micro_data["MTR SE income"] * (
            micro_data["SE income"].abs()
            / (micro_data["Wage income"].abs() + micro_data["SE income"].abs())
        )
        micro_data.rename(
            columns={
                "Adjusted total income": "market_income",
                "MTR capital income": "mtr_capinc",
                "Total tax liability": "total_tax_liab",
                "Year": "year",
                "Age": "age",
                "expanded_income": "market_income",
                "Weights": "weight",
            },
            inplace=True,
        )
        micro_data["payroll_tax_liab"] = 0
        test_tuple = txfunc.tax_func_loop(
            t,
            micro_data,
            beg_yr,
            s_min,
            s_max,
            age_specific,
            tax_func_type,
            analytical_mtrs,
            desc_data,
            graph_data,
            graph_est,
            output_dir,
            numparams,
        )

        expected_tuple = utils.safe_read_pickle(
            os.path.join(CUR_PATH, "test_io_data", "tax_func_loop_outputs.pkl")
        )

        for i, v in enumerate(expected_tuple):
            if isinstance(test_tuple[i], list):
                test_tuple_obj = np.array(test_tuple[i])
                exp_tuple_obj = np.array(expected_tuple[i])
                print(
                    "For element",
                    i,
                    ", diff =",
                    np.absolute(test_tuple_obj - exp_tuple_obj).max(),
                )
            else:
                print(
                    "For element",
                    i,
                    ", diff =",
                    np.absolute(test_tuple[i] - v).max(),
                )
>           assert np.allclose(test_tuple[i], v, atol=1e-06)
E           assert False
E            +  where False = <function allclose at 0x10e8feab0>([array([2.55264562e-22, 3.51268116e-05, 3.98196664e-09, 2.22091644e-04,\n       2.94518504e-01, 1.10154861e-03, 1.00000...5296e-01, 1.00000000e+00, 1.01110118e-03,\n       1.01110118e-03, 2.86635079e-03, 1.64414195e-03, 1.01110118e-03]), ...], [array([2.55264562e-22, 3.51268127e-05, 3.88349581e-09, 1.86048713e-04,\n       2.94518501e-01, 1.10154861e-03, 1.00000...0733e-01, 1.00000000e+00, 1.01110118e-03,\n       1.01110118e-03, 2.86635057e-03, 1.67539632e-03, 1.01110118e-03]), ...], atol=1e-06)
E            +    where <function allclose at 0x10e8feab0> = np.allclose

tests/test_txfunc.py:434: AssertionError
--------------------------------------------------- Captured stdout call ---------------------------------------------------
Year= 2025 Age= 21
Year= 2025 Age= 22
Year= 2025 Age= 23
Year= 2025 Age= 24
Year= 2025 Age= 25
Year= 2025 Age= 26
Year= 2025 Age= 27
Year= 2025 Age= 28
Year= 2025 Age= 29
Year= 2025 Age= 30
Year= 2025 Age= 31
Year= 2025 Age= 32
Year= 2025 Age= 33
Year= 2025 Age= 34
Year= 2025 Age= 35
Year= 2025 Age= 36
Year= 2025 Age= 37
Year= 2025 Age= 38
Year= 2025 Age= 39
Year= 2025 Age= 40
Year= 2025 Age= 41
Year= 2025 Age= 42
Year= 2025 Age= 43
Year= 2025 Age= 44
Year= 2025 Age= 45
Year= 2025 Age= 46
Year= 2025 Age= 47
Year= 2025 Age= 48
Year= 2025 Age= 49
Year= 2025 Age= 50
Year= 2025 Age= 51
Year= 2025 Age= 52
Year= 2025 Age= 53
Year= 2025 Age= 54
Year= 2025 Age= 55
Year= 2025 Age= 56
Year= 2025 Age= 57
Year= 2025 Age= 58
Year= 2025 Age= 59
Year= 2025 Age= 60
Year= 2025 Age= 61
Year= 2025 Age= 62
Year= 2025 Age= 63
Year= 2025 Age= 64
Year= 2025 Age= 65
Year= 2025 Age= 66
Year= 2025 Age= 67
Year= 2025 Age= 68
Year= 2025 Age= 69
Year= 2025 Age= 70
Year= 2025 Age= 71
Year= 2025 Age= 72
Year= 2025 Age= 73
Year= 2025 Age= 74
Year= 2025 Age= 75
Year= 2025 Age= 76
Year= 2025 Age= 77
Year= 2025 Age= 78
Year= 2025 Age= 79
Year= 2025 Age= 80
Year= 2025 Age= 81
Insuff. sample size for age 81 in year 2025
Year= 2025 Age= 82
Insuff. sample size for age 82 in year 2025
Year= 2025 Age= 83
Insuff. sample size for age 83 in year 2025
Year= 2025 Age= 84
Insuff. sample size for age 84 in year 2025
Year= 2025 Age= 85
Linearly interpolate previous blank tax functions
Fill in all remaining old age tax functions.
For element 0 , diff = 0.0
For element 1 , diff = 0.0
For element 2 , diff = 0.0
For element 3 , diff = 0.0
For element 4 , diff = 0.0
For element 5 , diff = 0.0
For element 6 , diff = 0.0
For element 7 , diff = 6.6615927808126045
__________________________________________________ test_tax_func_estimate __________________________________________________

tmpdir = local('/private/var/folders/d4/trj3dssd6s3g8kxvjmczz11w0000gn/T/pytest-of-richardevans/pytest-0/test_tax_func_estimate0')
dask_client = <Client: 'tcp://127.0.0.1:61381' processes=2 threads=4, memory=64.00 GiB>

    @pytest.mark.local
    def test_tax_func_estimate(tmpdir, dask_client):
        """
        Test txfunc.tax_func_loop() function.  The test is that given
        inputs from previous run, the outputs are unchanged.
        """
        input_tuple = utils.safe_read_pickle(
            os.path.join(CUR_PATH, "test_io_data", "tax_func_estimate_inputs.pkl")
        )
        micro_data = utils.safe_read_pickle(
            os.path.join(CUR_PATH, "test_io_data", "micro_data_dict_for_tests.pkl")
        )
        (
            BW,
            S,
            starting_age,
            ending_age,
            beg_yr,
            baseline,
            analytical_mtrs,
            age_specific,
            reform,
            data,
            client,
            num_workers,
        ) = input_tuple
        tax_func_type = "DEP"
        age_specific = False
        BW = 1
        test_path = os.path.join(tmpdir, "test_out.pkl")
        test_dict = txfunc.tax_func_estimate(
            micro_data,
            BW,
            S,
            starting_age,
            ending_age,
            start_year=2030,
            baseline=baseline,
            analytical_mtrs=analytical_mtrs,
            tax_func_type=tax_func_type,
            age_specific=age_specific,
            reform=reform,
            data=data,
            client=dask_client,
            num_workers=NUM_WORKERS,
            tax_func_path=test_path,
        )
        expected_dict = utils.safe_read_pickle(
            os.path.join(CUR_PATH, "test_io_data", "tax_func_estimate_outputs.pkl")
        )
        del expected_dict["tfunc_time"]

        for k, v in expected_dict.items():
            if isinstance(v, str):  # for testing tax_func_type object
                assert test_dict[k] == v
            elif isinstance(expected_dict[k], list):
                test_dict_obj = np.array(test_dict[k])
                exp_dict_obj = np.array(expected_dict[k])
                print(
                    "For element",
                    k,
                    ", diff =",
                    np.absolute(test_dict_obj - exp_dict_obj).max(),
                )
            else:  # for testing all other objects
                print(
                    "Max diff for ", k, " = ", np.absolute(test_dict[k] - v).max()
                )
>               assert np.all(np.isclose(test_dict[k], v))
E               assert False
E                +  where False = <function all at 0x10e8e6530>(array([[ True,  True,  True,  True,  True,  True,  True,  True,  True,\n         True,  True,  True,  True,  True,  Tru...rue,  True,  True,  True,  True,  True,  True,  True,\n         True,  True,  True,  True,  True,  True,  True,  True]]))
E                +    where <function all at 0x10e8e6530> = np.all
E                +    and   array([[ True,  True,  True,  True,  True,  True,  True,  True,  True,\n         True,  True,  True,  True,  True,  Tru...rue,  True,  True,  True,  True,  True,  True,  True,\n         True,  True,  True,  True,  True,  True,  True,  True]]) = <function isclose at 0x10e8febb0>(array([[     0.        ,      0.        ,      0.        ,\n             0.        ,      0.        ,      0.        ,\n...     0.        ,\n             0.        ,      0.        ,      0.        ,\n             0.        ,      0.        ]]), array([[     0.        ,      0.        ,      0.        ,\n             0.        ,      0.        ,      0.        ,\n...     0.        ,\n             0.        ,      0.        ,      0.        ,\n             0.        ,      0.        ]]))
E                +      where <function isclose at 0x10e8febb0> = np.isclose

tests/test_txfunc.py:722: AssertionError
---------------------------------------------------- Captured log setup ----------------------------------------------------
INFO     distributed.scheduler:scheduler.py:1711 State start
INFO     distributed.scheduler:scheduler.py:4072   Scheduler at:     tcp://127.0.0.1:61381
INFO     distributed.scheduler:scheduler.py:4087   dashboard at:  http://127.0.0.1:8787/status
INFO     distributed.scheduler:scheduler.py:7872 Registering Worker plugin shuffle
INFO     distributed.nanny:nanny.py:368         Start Nanny at: 'tcp://127.0.0.1:61384'
INFO     distributed.nanny:nanny.py:368         Start Nanny at: 'tcp://127.0.0.1:61386'
INFO     distributed.scheduler:scheduler.py:4424 Register worker <WorkerState 'tcp://127.0.0.1:61388', name: 0, status: init, memory: 0, processing: 0>
INFO     distributed.scheduler:scheduler.py:5932 Starting worker compute stream, tcp://127.0.0.1:61388
INFO     distributed.core:core.py:1019 Starting established connection to tcp://127.0.0.1:61390
INFO     distributed.scheduler:scheduler.py:4424 Register worker <WorkerState 'tcp://127.0.0.1:61391', name: 1, status: init, memory: 0, processing: 0>
INFO     distributed.scheduler:scheduler.py:5932 Starting worker compute stream, tcp://127.0.0.1:61391
INFO     distributed.core:core.py:1019 Starting established connection to tcp://127.0.0.1:61393
INFO     distributed.scheduler:scheduler.py:5689 Receive client connection: Client-5d9430b0-f94d-11ee-aa00-3e0b0b363548
INFO     distributed.core:core.py:1019 Starting established connection to tcp://127.0.0.1:61394
--------------------------------------------------- Captured stdout call ---------------------------------------------------
BW =  1 begin year =  2030 end year =  2030
Finished tax function loop through 1 years and 1 ages per year.
Tax function estimation time: 46.815 sec
For element tfunc_etr_params_S , diff = 0.03986738741392759
For element tfunc_mtrx_params_S , diff = 5.2194314981157675e-09
For element tfunc_mtry_params_S , diff = 0.25037693709098546
Max diff for  tfunc_avginc  =  0.0
Max diff for  tfunc_avg_etr  =  0.0
Max diff for  tfunc_avg_mtrx  =  0.0
Max diff for  tfunc_avg_mtry  =  0.0
Max diff for  tfunc_frac_tax_payroll  =  0.0
Max diff for  tfunc_etr_sumsq  =  0.008424060302786529
Max diff for  tfunc_mtrx_sumsq  =  0.0
Max diff for  tfunc_mtry_sumsq  =  25.278880264493637
rickecon commented 7 months ago

@jdebacker. I re-ran all the tests locally on my machine and got 17 failures instead of the previous 20. There were three fewer concurrent.futures errors, but I still had the same two failures in test_demographics.py and three failures in test_txfunc.py. It is also interesting that I get a different set of concurrent.futures test failures in this run than I did in the last run. Compare, in particular, the failures in test_SS.py, test_TPI.py, test_run_example.py, and test_run_ogcore.py.

(ogcore-dev) richardevans@Richards-MacBook-Pro-2 OG-Core % pytest
=========================================== test session starts ===========================================
platform darwin -- Python 3.11.8, pytest-8.1.1, pluggy-1.4.0
rootdir: /Users/richardevans/Docs/Economics/OSE/OG-Core
configfile: pytest.ini
testpaths: ./tests
plugins: cov-5.0.0, anyio-4.3.0, xdist-3.5.0
collected 536 items                                                                                       

tests/test_SS.py ...................................                                                [  6%]
tests/test_TPI.py ..................FFFFFF.F.                                                       [ 11%]
tests/test_aggregates.py .....................................                                      [ 18%]
tests/test_basic.py FF..                                                                            [ 19%]
tests/test_demographics.py .....F.F........                                                         [ 22%]
tests/test_elliptical_u_est.py .......                                                              [ 23%]
tests/test_execute.py F                                                                             [ 23%]
tests/test_firm.py .....................................................................            [ 36%]
tests/test_fiscal.py ...................                                                            [ 40%]
tests/test_household.py ..............................................                              [ 48%]
tests/test_output_plots.py ..............................................                           [ 57%]
tests/test_output_tables.py ..............                                                          [ 59%]
tests/test_parameter_plots.py ........................................                              [ 67%]
tests/test_parameter_tables.py .......                                                              [ 68%]
tests/test_parameters.py ..............                                                             [ 71%]
tests/test_run_example.py F.                                                                        [ 71%]
tests/test_run_ogcore.py F                                                                          [ 71%]
tests/test_tax.py ......................................                                            [ 78%]
tests/test_txfunc.py .....F.......F..........F.                                                     [ 83%]
tests/test_user_inputs.py .........                                                                 [ 85%]
tests/test_utils.py ..............................................................................  [100%]
========================================= short test summary info =========================================
FAILED tests/test_TPI.py::test_run_TPI[Baseline] - concurrent.futures._base.CancelledError: root-8b86c093-667b-4bb8-82ea-4e3a22e9d6ea
FAILED tests/test_TPI.py::test_run_TPI[Reform] - concurrent.futures._base.CancelledError: root-eb454283-2afb-453c-98b5-8fa8c4a93064
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, balanced budget] - concurrent.futures._base.CancelledError: root-156f3675-4f08-4f67-8118-ac5e3f1d93c5
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, small open] - concurrent.futures._base.CancelledError: root-6bc14d86-25c8-41ca-862f-2b12f6311397
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, small open for some periods] - concurrent.futures._base.CancelledError: root-a18edfcc-8871-4069-9b2c-ac9b0fb7e2a2
FAILED tests/test_TPI.py::test_run_TPI_extra[Baseline, delta_tau = 0] - concurrent.futures._base.CancelledError: root-80cb87f4-7bbb-4f30-8624-bc218371d1de
FAILED tests/test_TPI.py::test_run_TPI_extra[Reform, baseline spending] - concurrent.futures._base.CancelledError: root-eaadc23f-4e2c-4635-b148-3f7b6ba51596
FAILED tests/test_basic.py::test_run_small[SS] - concurrent.futures._base.CancelledError: root-99ce897a-c277-42f9-ba6d-1bfae4c031f9
FAILED tests/test_basic.py::test_run_small[TPI] - concurrent.futures._base.CancelledError: root-b611f50d-cdfb-48e6-81fd-3789ea6af8dd
FAILED tests/test_demographics.py::test_get_fert - TypeError: unsupported operand type(s) for -: 'list' and 'int'
FAILED tests/test_demographics.py::test_infant_mort - assert array([0.00477758]) == 0.00491958
FAILED tests/test_execute.py::test_runner_baseline_reform - concurrent.futures._base.CancelledError: root-1008377b-f217-42f5-94f8-cc748698ae02
FAILED tests/test_run_example.py::test_run_ogcore_example - assert False
FAILED tests/test_run_ogcore.py::test_run_micro_macro - concurrent.futures._base.CancelledError: root-1e84ff5a-01e1-4604-aee2-e81f5d20f95d
FAILED tests/test_txfunc.py::test_txfunc_est[DEP] - assert False
FAILED tests/test_txfunc.py::test_tax_func_loop - assert False
FAILED tests/test_txfunc.py::test_tax_func_estimate - assert False
====================== 17 failed, 519 passed, 15449 warnings in 25832.94s (7:10:32) =======================
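
Rather than re-running the full seven-hour suite a third time, I may just re-run the tests that failed in this pass using pytest's built-in failure cache, e.g. `pytest --lf` (short for `--last-failed`), and see whether the concurrent.futures errors clear on their own.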
rickecon commented 7 months ago

@jdebacker. I recommend we:

jdebacker commented 7 months ago

@rickecon thank you for these updates! Merging.