@bxy666666,
your code mixes a lot of terms and re-imports the same things. You run KS4 but call the variable KS2. You also do some WaveformExtractor steps and some SortingAnalyzer steps, so it is a bit hard to follow the code you're running. My first piece of advice would be to consider a little cleanup. For example, you could just do:
import spikeinterface as si # import core only
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre
import spikeinterface.sorters as ss
import spikeinterface.postprocessing as spost
import spikeinterface.qualitymetrics as sqm
import spikeinterface.comparison as sc
import spikeinterface.exporters as sexp
import spikeinterface.curation as scur
import spikeinterface.widgets as sw
and greatly clean up your spikeinterface imports; right now you import the same thing many times. Next, I would recommend factoring out the WaveformExtractor portions. If you are using the SortingAnalyzer, it is better to stick with it: it has some great improvements under the hood, so you'll have an easier time moving forward.
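In case it is useful, here is a minimal, self-contained sketch of the SortingAnalyzer route. It uses a toy generated recording so it can run anywhere; the extension names and arguments follow recent SpikeInterface versions, so adapt them to your version and data:
import spikeinterface.full as si

# toy recording/sorting pair just so the sketch is self-contained
recording, sorting = si.generate_ground_truth_recording(durations=[60.0], num_units=10)

# build one analyzer, then compute only the extensions you need
analyzer = si.create_sorting_analyzer(sorting, recording,
                                      format="binary_folder", folder="analyzer_output")
analyzer.compute("random_spikes")   # spike subsample used by the later extensions
analyzer.compute("waveforms")
analyzer.compute("templates")
analyzer.compute("noise_levels")
analyzer.compute("quality_metrics")
print(analyzer.get_extension("quality_metrics").get_data())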
Finally, how are you assessing GPU usage? What OS are you using? What CPU and GPU? KS4 will fall back to the CPU if it cannot access a GPU, so we need to know a bit more about your setup before we can advise you further.
Also, you can try the functions linked here to check whether torch can see your GPU. If it can, I believe KS4 will be using the GPU, since it uses torch under the hood.
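For reference, that kind of check is just standard PyTorch, something along these lines:
import torch

# if this prints False, KS4 has no GPU it can access and will fall back to CPU
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))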
Note that in your script all variable names refer to KS2, but KS4 is being run. KS2 requires an NVIDIA GPU and will error out if one is not available, so if it runs you know you are using the GPU.
Ok, thanks!
I am using Docker; how do I set up Docker so that it can use the GPU?
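For context, a minimal sketch of launching a sorter in a container from SpikeInterface, assuming the docker_image argument of run_sorter and that the host has the NVIDIA drivers plus the NVIDIA Container Toolkit installed so the container can see the GPU (argument names follow recent SpikeInterface versions, so double-check against the container docs):
import spikeinterface.full as si

# assumes Docker, NVIDIA drivers and the NVIDIA Container Toolkit are installed on
# the host; without the toolkit the container will only see the CPU
recording, _ = si.generate_ground_truth_recording(durations=[60.0])

sorting = si.run_sorter(
    "kilosort4",
    recording,
    folder="ks4_docker_output",
    docker_image=True,   # True lets SpikeInterface choose a default KS4 image (an assumption to verify)
    verbose=True,
)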
import os
os.environ['HDF5_PLUGIN_PATH'] = r'D:\sjwlab\bxy'
from pprint import pprint
import time
import spikeinterface
import spikeinterface as si  # import core only
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre
import spikeinterface.sorters as ss
import spikeinterface.postprocessing as spost
import spikeinterface.qualitymetrics as sqm
import spikeinterface.comparison as sc
import spikeinterface.core as score
import spikeinterface.exporters as sexp
import spikeinterface.curation as scur
import spikeinterface.widgets as sw
import spikeinterface.full as si
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from spikeinterface.widgets.isi_distribution import ISIDistributionWidget
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
from spikeinterface.core.sortinganalyzer import create_sorting_analyzer
from spikeinterface.exporters import export_to_phy
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from spikeinterface import (extractors as se, preprocessing as spre, sorters as ss,
                            postprocessing as spost, qualitymetrics as sqm, core as score,
                            exporters as sexp, full as si)
from spikeinterface.sortingcomponents.peak_detection import detect_peaks
from spikeinterface.core.sortinganalyzer import create_sorting_analyzer
from spikeinterface.exporters import export_to_phy
global_job_kwargs = dict(n_jobs=32, chunk_duration="1s")
si.set_global_job_kwargs(**global_job_kwargs)
def process_data(data_location, data_name="data.raw.h5"):
def delete_all_cache_files(base_cache_folder):
if __name__ == '__main__':
    # process multiple directories