Open h-mayorquin opened 1 month ago
We will need to review our use of `copy`, as that API is changing:
https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword
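A minimal sketch of the behavior change described in the migration guide: in NumPy 2.0, `copy=False` means "never copy" and raises `ValueError` when a copy is unavoidable, while `np.asarray` (or `copy=None`) keeps the old "copy only if needed" semantics across versions.

```python
import numpy as np

a = np.arange(4, dtype=np.int64)

# Old pattern: np.array(a, copy=False) meant "avoid a copy if possible".
# In NumPy 2.0, copy=False means "never copy" and raises ValueError
# when a copy is required (here, because of the dtype change).
try:
    b = np.array(a, dtype=np.float32, copy=False)
except ValueError:
    b = None  # NumPy >= 2.0 lands here

# Version-independent replacement: copies only when needed.
c = np.asarray(a, dtype=np.float32)
```

The `try`/`except` illustrates why a plain grep for `copy=False` is needed: the same call silently copies on 1.x and raises on 2.0.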
In #3032 I set the minimal numpy version maybe a little bit too high. The reason is that files pickled before that version won't open in numpy 2.0 in the future:
https://numpy.org/doc/stable/numpy_2_0_migration_guide.html#note-about-pickled-files
And I know that @samuelgarcia cares about that.
Other possible improvements once we bump to numpy 2.0:
They say sorting is also faster in numpy 2.0. I am looking forward to testing the sorting generation functions, which use this feature.
Here is the quantities NumPy 2.0 compatibility PR: https://github.com/python-quantities/python-quantities/pull/232
Normally we are not saving numpy arrays with object dtype in np.save(), so pickling by numpy should not happen. But some zarr stuff could be pickled under the hood; we need to check.
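One way to make the check explicit (a sketch, not the library's actual save path): `np.save(..., allow_pickle=False)` succeeds for compact numeric dtypes but refuses object-dtype arrays, so it flags any array that would silently fall back to pickle.

```python
import io
import numpy as np

# Plain numeric arrays (like spike vectors) never need pickle:
buf = io.BytesIO()
np.save(buf, np.arange(10), allow_pickle=False)  # works fine

# Object-dtype arrays require pickle; with allow_pickle=False
# np.save raises ValueError instead of silently pickling:
obj = np.array([{"a": 1}, None], dtype=object)
buf2 = io.BytesIO()
try:
    np.save(buf2, obj, allow_pickle=False)
    pickle_needed = False
except ValueError:
    pickle_needed = True
```

Running saves with `allow_pickle=False` in tests would surface any place where pickling happens implicitly.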
@samuelgarcia Would you rather relax the bound to the introduction of `np.ptp`, which we need?
> Would you rather relax the bound to the introduction of `np.ptp`, which we need?
Not sure I understand this. Does `np.ptp` disappear in numpy 2.0? Or was it introduced recently?
The second: it was introduced in 1.20, so that would be the new lower bound if we are not concerned about future pickleability.
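For context, `np.ptp` is just peak-to-peak, i.e. max minus min along an axis; a tiny sketch (the array values are made up for illustration). Note that the migration guide lists the `ndarray.ptp` method among the removals in 2.0, while the `np.ptp` function remains, so the function form is the safe one to depend on.

```python
import numpy as np

traces = np.array([[0.0, -1.5,  2.0],
                   [3.0,  1.0, -2.0]])

# Peak-to-peak (max - min) per column via the np.ptp function,
# which survives in NumPy 2.0 (unlike the arr.ptp() method).
ptp_per_column = np.ptp(traces, axis=0)
# column maxes [3, 1, 2] minus column mins [0, -1.5, -2] -> [3.0, 2.5, 4.0]
```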
OK, I understand now. My intuition would go for numpy 1.20; 1.26 looks very, very young.
I do not think that we have np.save() calls that pickle internally, but maybe I am wrong. np.save is mainly for extension and spike vector saving, no? So standard compact dtypes. Am I wrong?
I also feel that we should go for 1.20.
Let me do a brief search to see if we are pickling somewhere as part of a save. We should relax the numpy bound before the next release.
Let me open a PR and tag it so we don't forget :P
We are not able (and it is probably not a good idea) to support numpy 2.0 as soon as possible.
The ecosystem will take some time to adapt, and we should wait until most libraries that use and interact with spikeinterface support numpy 2.0, so we don't create problems for them.
We should though start deprecating functions and removing things that are not supported by 2.0 when that is possible. The PR https://github.com/SpikeInterface/spikeinterface/pull/3032 takes some steps in that direction.
One big roadblock is that some libraries used heavily by this package require lower bounds on their versions for supporting numpy 2.0. Here they are:
- numba 0.60.0 (with caveats, see)
- h5py 3.11
- hdmf-zarr 0.7.0
- scipy 1.13
- pandas 2.2.2
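A small sketch of how such lower bounds could be checked at runtime with `importlib.metadata`. The package names and versions below are the ones mentioned in this thread, not an official compatibility matrix, and the helper names are hypothetical.

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical lower bounds for numpy 2.0 support, taken from this thread.
NUMPY2_MINIMUMS = {
    "numba": (0, 60, 0),
    "h5py": (3, 11, 0),
    "scipy": (1, 13, 0),
    "pandas": (2, 2, 2),
}

def parse(v):
    """Naive 'X.Y.Z' parser, padded to three components; ignores suffixes."""
    parts = []
    for piece in v.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def below_minimum(minimums=NUMPY2_MINIMUMS):
    """Return installed packages whose version is below the numpy-2.0 floor."""
    bad = {}
    for name, floor in minimums.items():
        try:
            installed = parse(version(name))
        except PackageNotFoundError:
            continue  # not installed, nothing to check
        if installed < floor:
            bad[name] = installed
    return bad
```

Something like this could gate an informative warning rather than a hard failure while the ecosystem catches up.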
pynwb and hdmf-zarr don't support numpy 2.0 yet, so we can't install them if we install numpy 2.0.
We can keep this as an open issue to discuss how, when, and related issues.