ThetaS opened this issue 6 months ago
Thanks for the feedback. This is strange: it means that at this point all channels for a spike are cancelling (sum = 0). Your radius seems a bit small. Maybe you have a very dense probe, but at the border you may have only one or two channels for triangulation. Anyway, it is a warning, and for some spikes you should get a NaN for the location.
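The cancellation described here can be sketched with the same center-of-mass line that appears in the warning. Everything below is a made-up toy neighborhood (positions and amplitudes are illustrative, not from the actual probe):

```python
import numpy as np

# Hypothetical 4-channel neighborhood: (x, y) contact positions in µm
local_contact_locations = np.array(
    [[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]]
)

# Signed spike amplitudes on those channels; here the positive and
# negative deflections cancel exactly, so the denominator is zero
wf_data = np.array([5.0, -5.0, 3.0, -3.0])

with np.errstate(divide="ignore", invalid="ignore"):
    com = np.sum(wf_data[:, np.newaxis] * local_contact_locations, axis=0) / np.sum(wf_data)

print(com)  # neither coordinate is finite -> the spike gets no location
```

When the signed amplitudes sum to zero, the denominator vanishes and the computed location is NaN/inf, which is exactly why those spikes end up without a location.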
Hi Samuel @samuelgarcia ,
You predicted the output accurately:
In addition, I also received other warnings besides the one in the title:
Here is the probe map as constructed from the manufacturer coordinate packages:
For the radius, at that time I was using the setting from the DREDge AP registration tutorial. Our probe uses checkerboard tiling over 325 um for 32 channels (as in the figure above), so I think it is dense.
Another thing to mention is that the loaded binary 'amplifier.dat' file was separated from an Intan recording from the aux channels. The recording is in integer ADC steps, and to convert to uV I have to subtract 32768 and multiply by 0.194 (details in the Intan data format guideline). I suspect it might cause a problem to bandpass the raw data directly without first converting it to uV (as in figure 1).
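As a sanity check, the conversion described above can be sketched in a few lines (the 32768 offset and 0.194 scale are the values quoted in the post; the sample values are made up):

```python
import numpy as np

# Hypothetical raw samples as stored in amplifier.dat (unsigned 16-bit steps)
raw = np.array([32768, 32778, 32758], dtype=np.uint16)

# Conversion described above: subtract the 32768 offset, scale to microvolts.
# Convert to float first so the subtraction cannot wrap around in uint16.
scaled_uV = (raw.astype(np.float32) - 32768.0) * 0.194

print(scaled_uV)  # -> 0.0, 1.94, -1.94 µV
```

Filtering the raw unsigned data directly would treat the 32768 offset as a huge DC component, so removing the offset (or at least being aware of it) before preprocessing seems prudent.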
Hi @ThetaS, I wonder if you may be able to avoid this warning by setting radius_um=50 or even higher in localize_peaks? I think 75 or 100 um are not uncommon settings for Neuropixels data, for instance.
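To see why a small radius leaves border channels with too few neighbours, here is a toy neighbour-count sketch. The layout below is a hypothetical 2-column checkerboard (25 µm pitch, 20 µm column separation), not the actual 325 um geometry from the post:

```python
import numpy as np

# Hypothetical 2-column checkerboard layout with 32 channels --
# NOT the poster's actual probe geometry
rows = np.arange(16)
x = np.tile([0.0, 20.0], 16)
y = np.repeat(rows * 25.0, 2) + np.tile([0.0, 12.5], 16)
positions = np.column_stack([x, y])

# For each channel, count channels within a given localization radius
def neighbours_within(radius_um):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    return (d <= radius_um).sum(axis=1)  # includes the channel itself

print(neighbours_within(30.0).min())   # border channels have very few neighbours
print(neighbours_within(100.0).min())  # a larger radius pulls in more channels
```

With only two or three channels contributing, the weighted sum in the center-of-mass step is much more fragile, which matches the warning being seen mostly at the probe border.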
@ThetaS did you try @cwindolf's test? Has that helped?
@zm711 Hi Zach, sorry for the late reply. We've tried using 100 um and it finished the localization process. However, the registration process still failed on the localization output. Within the localization output we saw 2 NaNs, and the number of detected spikes was 600 million across a 7.02 hr recording. I have to confirm the exact spike count with my colleague, but we did see a lot being detected. My colleague suspects there is some problem with the detection process as well.
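A quick back-of-envelope check makes the implausibility concrete (numbers taken from the post: 600 million spikes, 7.02 hr, 32 channels):

```python
# Back-of-envelope: 600 million detected spikes over a 7.02 hr recording
n_spikes = 600e6
duration_s = 7.02 * 3600          # 25272 s
n_channels = 32

rate_total = n_spikes / duration_s          # ~23700 events/s overall
rate_per_channel = rate_total / n_channels  # ~740 events/s per channel

print(rate_per_channel)
# Real units rarely fire above ~100 Hz, so most of these "spikes"
# are almost certainly noise crossings
```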
The time to complete localization is 68 hrs.
When we tried to run the detection with a different parameter set, the computer crashed and we didn't get the chance to document the last output. Sorry about that.
I think there are 3 things to think about:
Hi Charlie, we thought the same thing about 600M spikes being way too much. Here's a 10 s snippet of the preprocessed recording (after filtering and referencing). This is one second for a subset of 10 channels; that cannot be possible, right?
Here's the noise levels:
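For reference, a robust MAD-based noise estimate (the kind of scaled-median-absolute-deviation estimator commonly used for noise_levels in spike sorting) can be sketched in plain NumPy on synthetic traces:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical traces: (samples, channels) of Gaussian noise with std = 10
traces = rng.normal(0.0, 10.0, size=(20000, 4))

# Robust per-channel noise level: median absolute deviation, scaled by
# 1.4826 so it matches the standard deviation for Gaussian data
mad = np.median(np.abs(traces - np.median(traces, axis=0)), axis=0)
noise_levels = mad * 1.4826

print(noise_levels)  # each value close to 10
```

If the true noise level is underestimated (for example because the data were left in raw unscaled integer steps), a threshold expressed in multiples of that noise level will sit far too low and fire on noise.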
We then tried this (increased threshold):
```python
from spikeinterface.sortingcomponents.peak_detection import detect_peaks

peaks = detect_peaks(
    recording_f,
    method="locally_exclusive",
    detect_threshold=10,
    noise_levels=noise_levels_int16,
    radius_um=100,
    peak_sign='neg',
    n_jobs=20,
    chunk_memory='10M',
    progress_bar=True,
)
```
and we are still getting 371281245 spikes from:

```python
print(len(peaks['channel_index']))
```
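One quick diagnostic: count peaks per channel and see whether a handful of channels dominate. The peaks array below is a tiny made-up stand-in with the same 'channel_index' field used in the print above:

```python
import numpy as np

# Hypothetical stand-in for the structured array returned by detect_peaks;
# only the 'channel_index' field is used here
peaks = np.zeros(10, dtype=[("sample_index", "int64"), ("channel_index", "int64")])
peaks["channel_index"] = [0, 0, 0, 0, 0, 0, 1, 1, 2, 3]

# Count detected peaks per channel; a hugely skewed distribution points
# at a few noisy channels rather than real spikes everywhere
counts = np.bincount(peaks["channel_index"], minlength=4)
print(counts)  # [6 2 1 1] -> channel 0 dominates
```

On the real 600M-peak output, a very uneven histogram would support the suspicion that specific channels (or a scaling problem affecting all of them) are driving the detections.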
There's definitely something weird going on with the channels?
Hello,
I was testing out DREDge with my recording, but there seems to be a problem during the peak localization step: it hits a divide-by-zero. I checked the probe coordinates and device indices, and both are good.
The warning message is:
```
RuntimeWarning: divide by zero encountered in divide
  com = np.sum(wf_data[:, np.newaxis] * local_contact_locations, axis=0) / np.sum(wf_data)
```
My preparation of the recording:
Peak detection:
Localization step (this is where the error comes from):
Please let me know which direction to head towards to solve this. Thanks a lot!