Closed: SusanL82 closed this issue 4 months ago
Hey @SusanL82, it looks like you reported this previously in #1878. Could you open an issue over on Neo? These types of overflow are operating-system dependent and often occur because a NumPy rather than a Python scalar is being used. Since this needs to be fixed on the Neo side, opening an issue there will allow us to fix it in the actual package and then propagate the fix here. Let us know if you have questions!
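For context, here is a minimal sketch of the difference being described (the values are arbitrary, not taken from Susan's data): Python ints have arbitrary precision and never overflow, while a fixed-width NumPy scalar such as `int32` wraps around modulo 2**32.

```python
import warnings
import numpy as np

big = np.int32(2_000_000_000)

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)  # NumPy may warn on scalar overflow
    wrapped = big + big  # int32 arithmetic wraps around

exact = int(big) + int(big)  # Python ints have arbitrary precision

print(wrapped)  # a negative, wrapped value
print(exact)    # 4000000000
```

This is also why the problem is OS-dependent: the default NumPy integer width differs between platforms (historically 32-bit on Windows, 64-bit on Linux/macOS), so the same code can overflow on one machine and not another.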
Oh my gosh, you're totally right! I'm now not sure how I made the error go away last time... maybe just by running on a different PC... I'm also not sure why I didn't find my own previous issue. I'll post it for Neo.
It could also be size of the dataset. Overflow doesn't happen until it does and with datasets getting bigger and bigger maybe you're just tipping over the limits more often now.
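To put rough numbers on that, here is a back-of-the-envelope sketch (the sampling rates are just examples, not taken from Susan's recording): a sample index stored in a signed 32-bit integer runs out surprisingly quickly at typical acquisition rates.

```python
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit sample index can hold

# Example sampling rates in Hz (hypothetical, for illustration only)
for fs in (24_000, 48_000):
    seconds = INT32_MAX / fs
    print(f"{fs} Hz: int32 sample index overflows after ~{seconds / 3600:.1f} h")
```

So a long concatenated recording can cross the int32 horizon in a single session, which matches the "overflow doesn't happen until it does" observation.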
@alejoe91 / @samuelgarcia, you might actually want to take a look at this one. I'm not seeing how we can get a scalar overflow. The only input into `AxonaRawIO` that could do this is `i_start`, but in this case the `i_start` provided would have to come from a call to `get_traces` inside of `detect_peaks`, which might be a problem with the node_pipeline? Wanna confirm that?
Hi. Thank you, Susan, for the report. And thank you, Zach, for the link to the node_pipeline; maybe, yes. I need to check. Keeping this open.
Could you test https://github.com/SpikeInterface/spikeinterface/pull/2854?
I think that should fix your issue.
In case it still helps, I've shared my data here: https://owncloud.cesnet.cz/index.php/s/u35wSdcL6qCU4Ie
(also: https://github.com/NeuralEnsemble/python-neo/issues/1475#issuecomment-2120785266)
I tried to download the data so we could test, but I ended up getting an invalid-zip error. Probably some corruption between the cloud storage and my computer, so I haven't been able to test this yet. Not sure if you wanted to try pulling this down and testing yourself, @h-mayorquin, or wait until @SusanL82 can test it?
OwnCloud has been a bit weird about one of the bin files. I thought I'd fixed it, but maybe not. Does it work from here instead? dropbox share link (I've just uploaded the files when I post this, it'll take a little while to sync)
Howdy @SusanL82,
I'm a little busy for the next couple days. But I'm happy to try working on this as soon as I can!
Hi everyone,
I'm encountering the following error/warning in a spike-extraction script I wrote a while ago. I am using the `detect_peaks` function on a concatenated Axona recording, which worked fine until recently. I've attached the script below; it's a bit long and not very elegant, but the error happens in the 'detect peaks' section, lines 95-99. The main goal of the script is to detect spikes and export them so that we can use them in a manual clustering program (SpikeSort3D). I usually run it in Spyder.
For some reason, I now encounter the following:
I have updated to Python 3.10 and installed the latest version of spikeinterface (within the latest Anaconda), and the warning/error persists. What can I do about it?