Status: Closed (closed by SusanL82 8 months ago)
Hi @SusanL82
The max possible clusters parameter is unfortunately not exposed on our side. I could push the change, but as you realized you have to use a previous version of SI to be able to run Klusta, so any change to the current code would be incompatible.
Are you running Klusta via Docker? If not, you could fork SI, check out the 0.95.1 version, and add the missing parameter here: https://github.com/SpikeInterface/spikeinterface/blob/main/src/spikeinterface/sorters/external/klusta.py#L40. This would expose the parameter to the run_sorter() function.
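Conceptually, the fix amounts to adding the missing key to the Klusta sorter's default-parameter dict, since run_sorter() merges user-supplied keyword arguments over those defaults. Below is a minimal, self-contained sketch of that merge pattern; it mimics rather than imports SpikeInterface's BaseSorter, and the other parameter names shown are hypothetical placeholders:

```python
# Sketch only: imitates how a SpikeInterface sorter's defaults are merged
# with the kwargs passed to run_sorter(). Not the actual SI implementation.

_default_params = {
    'threshold_strong_std_factor': 5,   # hypothetical existing entry
    'max_possible_clusters': 1000,      # the new entry you would add in klusta.py
}

def resolve_sorter_params(**user_params):
    """Merge user-supplied kwargs over the sorter defaults (run_sorter-style)."""
    params = dict(_default_params)
    params.update(user_params)
    return params

# Once the key exists in the defaults, a user can override it:
params = resolve_sorter_params(max_possible_clusters=100)
```

After that one-line change to `_default_params` in klusta.py, something like `run_sorter('klusta', recording, max_possible_clusters=100)` would pass the value through to KlustaKwik.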
@alejoe91, did you want to leave this open? @SusanL82, did you want to pursue Alessio's suggestion or have you decided to do something else?
I'll close this, but reopen/open a new issue if you have more questions @SusanL82 :)
Hello,
I finally managed to get my clustering with klustakwik working (yay!), but the whole process seems incredibly slow.
The entire clustering procedure for a single tetrode in a ~3 hour recording took over 5 hours (1 hour of which was spike detection and feature computation). This seems like a lot, because running KlustaKwik via different software (e.g. NLX's SpikeSort3D) takes about half the time with mostly the same settings. One big difference is that we generally set the maximum number of detected clusters to 100 per tetrode (the default is 1000), and I think this could explain the speed difference. Is there some way to adjust KlustaKwik's initial parameter max_possible_clusters? It is not available as a parameter in run_sorter().
Susan

(attachment: klusta.log)