samReiter opened this issue 8 years ago
Hi Sam,
There was a deliberate decision to limit ops.nt0 to a sensible number, because it is used to allocate shared memory on the GPU, which cannot be allocated dynamically. I can in principle increase this upper limit (I think it's ~80 right now), but are you sure you need such long spike windows?
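For readers unfamiliar with the constraint: statically declared CUDA shared memory has its size fixed at compile time and a hard per-block budget (48 KB on many devices), so the kernel has to reserve space for the largest nt0 it will ever see. Here is a rough back-of-the-envelope sketch in Python; the buffer layout, thread count, and budget are illustrative assumptions, not taken from the actual kernel:

```python
# Illustrative arithmetic: why a compile-time cap on nt0 is needed.
# Assumptions (NOT from the Kilosort source): one float32 sample per
# thread per waveform position, 128 threads per block, and the common
# 48 KB static shared-memory budget per block.
SHARED_MEM_BUDGET = 48 * 1024   # bytes per block on many NVIDIA GPUs
THREADS_PER_BLOCK = 128         # hypothetical launch configuration
BYTES_PER_FLOAT = 4

def shared_bytes(nt0):
    """Shared memory a block would need for nt0-sample waveforms."""
    return nt0 * THREADS_PER_BLOCK * BYTES_PER_FLOAT

def max_nt0():
    """Largest nt0 that still fits inside the static budget."""
    return SHARED_MEM_BUDGET // (THREADS_PER_BLOCK * BYTES_PER_FLOAT)

print(shared_bytes(81))  # 41472 bytes -> fits in 48 KB
print(max_nt0())         # 96 under these assumed parameters
```

Under these assumed numbers the budget runs out in the mid-90s, which is at least consistent with a cap around 80 leaving some headroom, but the real limit depends on how the kernel actually lays out its shared buffers.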
Hi Marius,
Oh no, I'm good with staying underneath 80, just thought I would let you know in case it was a sign of some more serious problem. Thank you for your work on spike sorting, it's great.
All the best, Sam
Great, thanks, just let me know about any other bugs you run into. I will add a note and an error in Matlab saying nt0 cannot go above 80.
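The guard described here could look something like the following. This is a Python sketch of the idea only; the real check would live in the MATLAB configuration code, and apart from the field name ops.nt0 and the ~80 ceiling mentioned above, everything here is a hypothetical stand-in:

```python
NT0_MAX = 81  # hypothetical cap; the thread only says the limit is "~80"

def validate_ops(ops):
    """Fail fast with a clear message instead of a CUDA error
    deep inside the pipeline."""
    nt0 = ops.get("nt0", 61)  # 61 assumed as the default window length
    if nt0 > NT0_MAX:
        raise ValueError(
            f"ops.nt0 = {nt0} exceeds the maximum of {NT0_MAX}; "
            "nt0 is used to size static GPU shared memory."
        )
    return ops

validate_ops({"nt0": 61})  # passes silently
```

Failing at configuration time turns an opaque GPU allocation failure into an actionable message that names the offending parameter.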
Thank you very much for adding this. If I increase the value above 61, I run into errors with the hardcoded values for removing peaks in isolated_peaks.m and for setting the spike peak time in get_PCproj.m when I initialize from data. What do you think about having these values adjust with ops.nt0? I also hit a CUDA error if I make the samples per spike larger than a certain size: I tried setting ops.nt0 = 97 and dt in get_PCproj.m to 41, and in isolated_peaks.m I changed line 10 to peaks([1:40 end-80:end], :) = 0;
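The suggestion of having the hardcoded values adjust with ops.nt0 could be sketched like this. The specific formulas below are guesses reverse-engineered from the single data point reported above (nt0 = 97 giving dt = 41 and zeroed windows of 40 and 80 samples); they are illustrative only, not the actual relationships in the Kilosort code:

```python
# Hypothetical derivation of the peak-window parameters from nt0,
# replacing the hardcoded constants in isolated_peaks.m and
# get_PCproj.m. Offsets chosen only to reproduce the one reported
# configuration (nt0 = 97); the true relationships may differ.
def window_params(nt0):
    pre = nt0 - 57    # samples zeroed at the start of the trace
    post = nt0 - 17   # samples zeroed at the end of the trace
    dt = nt0 - 56     # peak-time offset used when extracting PCs
    return pre, post, dt

print(window_params(97))  # (40, 80, 41)
```

Whatever the real formulas are, computing these windows from a single nt0 setting would keep the three files consistent automatically instead of requiring hand edits in each one.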
No error if I keep ops.nt0 in the 70s.