Closed: achristensen56 closed this issue 8 months ago
Hi all,
I just wanted to continue this conversation, as I've done some more experiments which basically lead me to more questions than answers.
If I take recordings on two subsequent days and just look at the activity on each channel, it looks identical, including the distribution of noise, spikes, etc. Really, every channel looks exactly the same; in terms of overall statistics and general properties, it could have been the same recording file. So great, a super stable recording.
Except when I use KS2.5 on the concatenated files, the result sometimes looks even worse than what I pasted above: a 60 um shift, and subjectively no alignment on the spikes-vs-depth plot. But a 60 um shift shouldn't be possible, since I know every single channel is basically perfectly stable. So what gives?? Very confused. We are considering just running this with no registration at all, but that doesn't seem ideal... Has anyone run into a similar circumstance?
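For anyone else trying this workflow: the concatenation itself is simple to do before sorting. Here is a minimal sketch of appending two days' recordings into one `.bin` file in the int16, channel-interleaved layout Kilosort reads; the function name and file names are mine for illustration, not part of Kilosort.

```python
import numpy as np

def concat_bins(files, out_path, n_channels=384):
    """Append several int16, channel-interleaved .bin recordings into one
    file. Illustrative sketch only; not Kilosort code."""
    with open(out_path, "wb") as out:
        for f in files:
            data = np.memmap(f, dtype=np.int16, mode="r")
            # sanity-check that the sample count divides by the channel count
            assert data.size % n_channels == 0, f"{f}: unexpected size"
            data.tofile(out)  # append the raw samples unchanged
```

When splitting the sorted output back into sessions afterwards, you just need to keep track of each file's sample count.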
@marius10p you mentioned in another thread that alignment works best in KS2.0? What's the failure mechanism of 2.5 and 3.0? Have you seen this before, where qualitatively totally stable recordings are shifted way too much by KS2.5? Any suggestions for parameters I should change?
Another update (maybe someday this will be useful to someone!)
Running KS2.5 with rigid registration seems to have more or less fixed our alignment issues. In fact, even single-day recordings look better with rigid registration.
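For others who land here: in Kilosort 2.5 the registration mode is controlled by the `ops.nblocks` parameter (see the KS2.5 README). A minimal config sketch of what "rigid registration" means there:

```matlab
% Kilosort 2.5 registration mode (see the KS2.5 README):
%   ops.nblocks = 0  -> no drift correction
%   ops.nblocks = 1  -> rigid registration (whole probe shifts together)
%   ops.nblocks > 1  -> non-rigid registration in that many blocks
ops.nblocks = 1;  % the rigid setting that fixed our alignment
```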
Hi @achristensen56
Reading this last point, I suspect this issue is related to something I've noticed as well while working with a dataset with very large (up to 150 um) non-rigid drift. I've been meaning to open an issue too but haven't finished double-checking my work.
I've noticed two things that might cause issues with datasets with large drift, in particular non-rigid drift. I'm not sure they apply to your case, but you might want to have a look anyway:
The maximum possible drift for the whole probe, and likewise for each block of channels after aligning the whole probe to the template, is hardcoded in align_block2 (here and here).
In align_block2, at each step of the main loop of rigid alignment, the template to which each batch's fingerprint is aligned is obtained by iteratively averaging the rigidly aligned fingerprints. A final step then (non-rigidly) aligns each block-by-batch fingerprint to the (rigid) template. Because the non-rigid alignment is not reflected in the template, the target fingerprint looks pretty bad in regions exhibiting large-amplitude non-rigid (local) drift, which impairs the alignment of each block. I've "fixed" the algorithm by adding a loop of non-rigid alignment during which the template is updated. The corresponding commit is here; feel free to give it a shot, I'm curious whether it helps: https://github.com/TomBugnon/Kilosort/commit/70dbf1960203d8b812445a603ada8ed9d6ef1634. I'm not done testing it though. Here are the templates returned by align_block with the default vs. modified algorithm (better alignment on the bottom):
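To make the "iteratively averaged template" idea above concrete, here is a toy sketch of rigid registration by cross-correlating each batch's depth fingerprint against a template that is re-averaged from the aligned fingerprints on every iteration. This is my own minimal illustration of the scheme, not Kilosort's actual align_block2 code, and it uses integer shifts only.

```python
import numpy as np

def rigid_align(fingerprints, n_iter=5, max_shift=15):
    """Toy rigid drift registration: fingerprints is (n_batches, n_depth_bins).
    Each batch is aligned to a template that is rebuilt from the aligned
    fingerprints at every iteration. Illustrative only, not Kilosort code."""
    n_batches = fingerprints.shape[0]
    shifts = np.zeros(n_batches, dtype=int)
    template = fingerprints.mean(axis=0)
    for _ in range(n_iter):
        for b in range(n_batches):
            # pick the integer shift that best correlates with the template
            scores = [np.dot(np.roll(fingerprints[b], s), template)
                      for s in range(-max_shift, max_shift + 1)]
            shifts[b] = np.argmax(scores) - max_shift
        # rebuild the template from the newly aligned fingerprints
        aligned = [np.roll(fingerprints[b], shifts[b]) for b in range(n_batches)]
        template = np.mean(aligned, axis=0)
    return shifts, template
```

The point of the fix described above is that in the non-rigid stage the template should keep being updated in the same way, so that locally drifting regions don't blur the target.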
Another possibility is https://github.com/evarol/NeuropixelsRegistration, which I suppose/hope will be integrated with Kilosort eventually.
Cheers
Hi @TomBugnon, thanks for the comment! I'm so sorry I missed this response. I will look into trying out your version of the registration or the Neuropixels registration code you linked! Thanks so much. Amy
Update @TomBugnon: I tried your non-rigid target code; unfortunately it doesn't fix my issues. Even when the drift map alignment looks good by eye, KS often (basically always!) splits cells into day 1 and day 2. I can merge them in Phy, but that's not ideal.
@achristensen56 Sorry to hear. Would you care to share the drift map (or ideally some zoom of it) with both versions of the algorithm?
Hi All!
Similar to other discussions on tracking, I was wondering if we could have a discussion about the amount of drift and noise that's expected from day to day, and what amount is indicative of a probe with too much drift to successfully do day to day alignment.
I've read the NP2.0 paper and looked at those examples, but, as with all papers, I assume those are rather best-case scenarios. Would people be willing to share some examples of "good" and "bad" day-to-day alignment drift maps and neural features, and explain qualitatively how you make those decisions? We can of course use the quantitative metrics in the KS/NP paper, but we'd also like to get a qualitative feel for this type of data and for what quality we should be aiming at, based on community wisdom.
I'm attaching the KS output from a day-to-day alignment we recently attempted. These sessions were recorded on consecutive days with a chronic probe in an awake, behaving rat. Due to some silly technical failures of our recording setup, we also had to align many individual sessions within each day. But as you can see, the large shift in the middle is the "day-to-day" one, and it is much more poorly registered than the almost imperceptible shifts between the sub-sessions on the same day. All told, the data below comprise 5 separate sessions aligned across two different days.
Here are some screenshots from Phy of the resulting sorting output. I've attached a neuron that seems 'good' and one that seems questionable. In all cases there are basically zero ISI violations, amplitude is > 100, and minimum firing rate is > 0.1. Good: Questionable:
Very interested in any insight the community might have, in particular if you have suggestions for how to improve our day to day alignment!