NeuralEnsemble / PyNN

A Python package for simulator-independent specification of neuronal network models.

Replacing channel_id by source_id as multiprocessed simulations are crashing on this line #762

Closed: RCagnol closed this pull request 4 months ago

RCagnol commented 1 year ago

I noticed that the new version of neo uses channel_id to get the ids of the neurons in a SpikeTrainList object. Multiprocessed NEST simulations crash on this line of code; replacing channel_id with source_id there seems to solve the issue.

coveralls commented 1 year ago

Coverage Status

coverage: 71.733% (+0.06%) from 71.675% when pulling e34477ffb87f73d6076b93489679115533887a37 on RCagnol:master into 4ac102bcc13d4640296d151545b5c590a14edc6a on NeuralEnsemble:master.

apdavison commented 1 year ago

channel_id appears nowhere else in the PyNN codebase, whereas source_id appears in several places. This makes me think that a more extensive PR might be needed.

RCagnol commented 1 year ago

Indeed, I didn't correctly identify the source of the problem when creating this fix. In the _get_current_segment() method, when the spike times are contained in a tuple, the SpikeTrainList object is created (here: https://github.com/NeuralEnsemble/PyNN/blob/master/pyNN/recording/__init__.py#L289) without any annotations corresponding to the source_id. I'll update the PR.
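To illustrate the failure mode, here is a minimal, hypothetical sketch of what happens when spike trains are built without the expected per-train id annotation. The class and function names below are stand-ins for illustration only, not the actual neo or PyNN API:

```python
# Hypothetical stand-in for a neo SpikeTrain carrying per-train annotations.
class FakeSpikeTrain:
    def __init__(self, times, **annotations):
        self.times = times
        self.annotations = annotations

def neuron_ids(trains, key):
    """Look up a per-train id annotation; raises KeyError if it is missing."""
    return [st.annotations[key] for st in trains]

# Trains annotated with 'source_id' can be resolved by that key...
trains = [FakeSpikeTrain([1.0, 2.5], source_id=0),
          FakeSpikeTrain([0.3], source_id=1)]
print(neuron_ids(trains, "source_id"))  # [0, 1]

# ...but looking them up under a different key ('channel_id') fails,
# analogous to constructing the SpikeTrainList without the annotation.
try:
    neuron_ids(trains, "channel_id")
except KeyError as err:
    print("missing annotation:", err)
```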

RCagnol commented 1 year ago

Do you have any insight into why the tests are failing in the GitHub Actions workflow? This confuses me because all NEST tests pass when I run them locally (Linux, Python 3.8.10, NEST 3.3), and the cause of the failures is not displayed because the workflow is cancelled before finishing.

apdavison commented 1 year ago

It seems that the tests are now using NumPy 1.24, which removes numpy.bool.

see https://numpy.org/devdocs/release/1.24.0-notes.html#expired-deprecations

=========================== short test summary info ===========================
FAILED test/unittests/test_random.py::ParallelTests::test_parallel_safe_with_mask - AttributeError: module 'numpy' has no attribute 'bool'
FAILED test/unittests/test_random.py::ParallelTests::test_parallel_unsafe_with_mask - AttributeError: module 'numpy' has no attribute 'bool'
==== 2 failed, 470 passed, 209 skipped, 56 warnings in 1632.33s (0:27:12) =====

RCagnol commented 1 year ago

Thanks for the answer. At first I didn't think that the presence of 'np.bool' was the reason almost all the NEST tests were failing, but I just noticed that NEST 3.3 faces the same problem: https://github.com/nest/nest-simulator/pull/2571. Therefore the tests would still fail (apart from the 'Test on windows-latest with Python 3.8' workflow, which skips the NEST tests) even after replacing 'np.bool' with 'bool' in PyNN. So probably the solution would be to use NumPy 1.23 in the workflow until NEST 3.4 comes out?
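One way to express that pin in CI would be a step like the following. This is a hypothetical GitHub Actions excerpt; the actual workflow file and step names in the PyNN repository may differ:

```yaml
# Hypothetical workflow step: pin NumPy below 1.24 until NEST 3.4
# (which no longer uses the removed np.bool alias) is available.
- name: Install dependencies
  run: |
    python -m pip install "numpy<1.24"
```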

apdavison commented 4 months ago

Hi @RCagnol, I extended your solution and made a new PR: #797.