Where does yours break, and at which stage of processing? Please post the command-line output.
It's been months since we tried it, and we're only looping back now... I'll try to get the error and re-send it. But I think I went through it once here before, and we thought it was GPU RAM.
Can I quickly ask what you think of the NVIDIA Quadro P6000?
It's not worth it. Depending on what your problem is, it might help you sort 50% or 100% more data, but the speed will be the same as a standard top-of-the-line GTX/RTX card.
I am working on your option 2, though, and it should be ready soon.
It would be amazing if you work on option 2. I know a few labs that'd love that! Thanks so much.
I have added a function called runTemplates which takes as input previously found templates. This is also used inside the main optimization script "learnAndSolve8b", which first calls a function to learn the templates and then calls this function to extract the spikes. If you give it templates, it will skip the learning step while keeping everything else the same. The use scenario would thus be that you run the master_kilosort script as usual and just intervene before learnAndSolve8b to add a set of templates to the context structure "rez". More instructions for how to do this are in the comments of "runTemplates".
You will probably encounter some issues in trying to interpret the results. For example, if you run any merges and splits afterwards, they might still change the clusters, making it hard to map them to a previous session. You also probably want to use the template "state" from the very end of the previous session, which you can find in rez.WA, rez.UA, and rez.muA, and "daisy-chain" the extraction like this across multiple sessions, each time replacing the templates with those found at the very end of the previous session.
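For concreteness, here is a minimal sketch of that workflow (not the official recipe). It assumes that "adding templates to rez" means copying the previous session's final template state from rez.WA, rez.UA, and rez.muA into rez.W, rez.U, and rez.mu before extraction; those target field names, the call signature of runTemplates, the options variable `ops`, and the saved-file name `rez_prev.mat` are all assumptions, and the comments in runTemplates remain the authoritative instructions.

```matlab
% Sketch only: re-using templates from a previously sorted session.
ops = yourUsualOps;                      % hypothetical: your usual Kilosort2 ops struct,
                                         % built as in master_kilosort
rez = preprocessDataSub(ops);            % preprocessing, as in master_kilosort

% load the rez saved at the end of the previous (already sorted) session
prev = load('rez_prev.mat', 'rez');      % hypothetical file name

% take the template "state" from the very end of the previous session
rez.W  = prev.rez.WA(:, :, :, end);      % temporal components of each template (assumed field)
rez.U  = prev.rez.UA(:, :, :, end);      % spatial components / channel weights (assumed field)
rez.mu = prev.rez.muA(:, end);           % template amplitudes (assumed field)

% extract spikes with the supplied templates; learning is skipped
rez = runTemplates(rez);                 % assumed signature: rez in, rez out
```

Equivalently, following the use scenario described above, you could run master_kilosort as usual and make these three assignments just before learnAndSolve8b is called. Daisy-chaining then just means saving rez.WA/UA/muA from this run and repeating the same assignments for the next session.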
For your specific purpose, I would probably advocate a different approach. You could split your data into N overlapping blocks, say each block overlapping 50% with the past and future blocks. Then spike sort each block separately, and match clusters from consecutive blocks based on how often their spikes coincide in the overlap segments. You should only get strong matches for the same neurons. This has the advantage of processing the data in a more temporally uniform way, so that clusters can come and go and don't necessarily need to be tracked all the way from the beginning to the end.
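As an illustration of that block-matching idea (nothing shipped with Kilosort2), here is a rough MATLAB sketch that scores cluster pairs from two consecutive blocks by how often their spikes coincide in the overlap window. The 0.5 ms coincidence tolerance, the 0.8 match threshold, and the normalization by the smaller spike count are arbitrary assumptions you would tune for your data.

```matlab
function pairs = matchOverlapClusters(stA, clA, stB, clB, t0, t1)
% Match clusters across two overlapping blocks by spike coincidence.
% stA/clA, stB/clB: spike times (s) and cluster labels from consecutive blocks.
% [t0, t1]: the time window where the two blocks overlap.
tol = 0.5e-3;                             % coincidence tolerance, 0.5 ms (assumed)

% restrict both blocks to the overlap window
inA = stA >= t0 & stA <= t1;  stA = stA(inA);  clA = clA(inA);
inB = stB >= t0 & stB <= t1;  stB = stB(inB);  clB = clB(inB);

uA = unique(clA(:));  uB = unique(clB(:));
score = zeros(numel(uA), numel(uB));
for i = 1:numel(uA)
    ti = stA(clA == uA(i));
    for j = 1:numel(uB)
        tj = stB(clB == uB(j));
        % count spikes of cluster i that have a coincident spike in cluster j
        nCoinc = sum(min(abs(ti(:) - tj(:)'), [], 2) < tol);
        % normalize by the smaller spike count so the score lies in [0, 1]
        score(i, j) = nCoinc / max(1, min(numel(ti), numel(tj)));
    end
end

% declare a match when most spikes coincide (threshold is an assumption)
[iA, iB] = find(score > 0.8);
pairs = [uA(iA), uB(iB)];                 % rows: [cluster in block A, cluster in block B]
end
```

Chaining these pairwise matches across all consecutive blocks then gives you cluster identities that can appear, persist for a while, and disappear, without requiring every unit to exist for the full recording.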
This sounds perfect! We have a bunch of deadlines coming up, but I anticipate we can get to trying it in December. Thanks Marius! (And I hope this will help KS2 in general too.) Really appreciate your being so responsive - you're a busy guy.
Closing for lack of activity. Also, these kinds of issues should be fixed (I hope) in the imminent new release which uses a different approach for correcting drift.
Thanks Marius
Hello, I am coming back to this issue as we now have some very long-duration recordings. The last thing written was that KS2.5 or KS3 might be better able to 1) extract templates from one set of data and 2) find spikes in a different set of data. Is that right? Can anyone guide me on how to do that with the more recent Kilosort versions?
We have the same out-of-memory issues as others with long recordings in Kilosort2. This is from a single-shank probe (64 channels), so we cannot break the channels up. I have some questions to try to deal with it, and I'm hoping for some answers:
We can try to throw money at the problem - we might get a 24GB GPU. Is it correct that the NVIDIA Quadro P6000 (https://www.amazon.com/NVIDIA-Quadro-P6000-Graphics-DisplayPort/dp/B01M0S2FKR) is better than Titan or Tesla cards that also have 24GB?
Is there a way to derive templates from one portion of the data and then apply those templates to the whole recording (say, feed it 1/5th of the data for template learning and then apply the templates to the full recording later)? The idea here is that the matching part might be less RAM-dependent than the template-learning part.
Thanks very much,
Brendon Watson, Univ of Michigan