TopEFT / topeft


Pin numpy to 1.23.5 to fix a conflict between coffea and numpy #405

Closed · anpicci closed this 7 months ago

anpicci commented 7 months ago

While setting up the local environment to include correction_lib in top_eft, I obtained the following error when running the processor with futures:

```
Traceback (most recent call last):
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 786, in _processwith
    merged = _watcher(FH, self, reducer, pool)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 402, in _watcher
    batch = FH.fetch(len(FH.completed))
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 286, in fetch
    raise bad_futures[0].exception()
TypeError: __class__ assignment: 'NanoEventsArray' object layout differs from 'Array'

Traceback (most recent call last):
  File "/afs/crc.nd.edu/user/a/apiccine/correction-lib/topeft/analysis/topeft_run2/run_analysis.py", line 330, in <module>
    output = runner(flist, treename, processor_instance)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 1700, in __call__
    wrapped_out = self.run(fileset, processor_instance, treename)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 1848, in run
    wrapped_out, e = executor(chunks, closure, None)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 817, in __call__
    return _processwith(pool=poolinstance, mergepool=mergepoolinstance)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 801, in _processwith
    raise e from None
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 786, in _processwith
    merged = _watcher(FH, self, reducer, pool)
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 402, in _watcher
    batch = FH.fetch(len(FH.completed))
  File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/coffea/processor/executor.py", line 286, in fetch
    raise bad_futures[0].exception()
TypeError: __class__ assignment: 'NanoEventsArray' object layout differs from 'Array'
```
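Since this failure only shows up at runtime, a small guard can catch the version drift before the processor starts. This is a hedged sketch, not part of topeft; the helper `numpy_is_compatible` is hypothetical and simply encodes the "at most 1.23.x" constraint proposed in this issue:

```python
def numpy_is_compatible(version: str) -> bool:
    """Return True if the numpy version is at most the 1.23 series,
    the last series reported to work with coffea in this setup."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= (1, 23)

# Checks against the two versions mentioned in this issue:
print(numpy_is_compatible("1.23.5"))  # True  (pinned version, works)
print(numpy_is_compatible("1.26.4"))  # False (triggers the TypeError above)
```

In practice one would call it with `numpy.__version__` at startup and raise a clear error instead of letting the futures executor fail mid-run.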

To fix this, I downgraded numpy from 1.26.4 to 1.23.5. Accordingly, it would be good to pin numpy to this version in environment.yml.
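A minimal sketch of what the pin could look like in environment.yml; only the numpy line reflects the change proposed here, and the surrounding structure (name, channels, other dependencies) is illustrative, not copied from the actual file:

```yaml
# Illustrative excerpt; only the numpy pin is the change proposed in this PR.
name: clib-env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - numpy=1.23.5  # pinned: 1.26.4 conflicts with coffea (NanoEventsArray layout TypeError)
  - coffea
```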

codecov[bot] commented 7 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 27.17%. Comparing base (df6ef86) to head (5441170).

Additional details and impacted files

```diff
@@           Coverage Diff            @@
##           master     #405   +/-   ##
=======================================
  Coverage   27.17%   27.17%
=======================================
  Files          28       28
  Lines        4224     4224
=======================================
  Hits         1148     1148
  Misses       3076     3076
```

| [Flag](https://app.codecov.io/gh/TopEFT/topeft/pull/405/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT) | Coverage Δ | |
|---|---|---|
| [unittests](https://app.codecov.io/gh/TopEFT/topeft/pull/405/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT) | `27.17% <ø> (ø)` | |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.

anpicci commented 7 months ago

@bryates thank you for spotting that issue; it should be fixed now.