natashakbt closed this issue 4 months ago
Also, this is on data which I had partially analyzed but reset using python blech_clean_slate.py. Not sure if that could contribute to the error.
It seems like your code is trying to use neurecommend functionality but can't find it. From your error snippet below:
Traceback (most recent call last):
    feature_names = json.load(open(feature_names_path, 'r'))
FileNotFoundError: [Errno 2] No such file or directory: '/home/natasha/Desktop/neuRecommend/model/feature_names.json'
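The traceback shows json.load being handed a path to a neuRecommend model file that is not installed. A defensive loading pattern (a sketch only; load_feature_names is a hypothetical helper, not part of blech_clust) would check for the file first and fall back gracefully:

```python
import json
import os


def load_feature_names(feature_names_path):
    """Return the classifier feature names, or None when the
    neuRecommend model files are not installed.

    Hypothetical helper for illustration; blech_clust itself
    loads the file directly and raises FileNotFoundError.
    """
    if not os.path.exists(feature_names_path):
        # Missing model files: signal "no classifier" instead of crashing
        return None
    with open(feature_names_path, 'r') as f:
        return json.load(f)
```

Callers can then treat a None return as "run without the classifier" rather than aborting the whole pipeline.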
Under blech_clust/params/waveform_classifier_params, could you make sure that the first 3 options are set to false, like below:
{
    "use_neuRecommend": false,
    "use_classifier": false,
    "throw_out_noise": false,
    "min_suggestion_count": 2000
}
Then delete results.log from the data dir and try running blech_run_process again. If you'd like to use the classifier, that may require more setup.
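The suggested workaround can be sketched as a short shell sequence (the data directory path is a placeholder, and the final rerun is left commented out since it depends on your checkout):

```shell
# Workaround sketch: disable the classifier, clear the stale log, rerun.
DIR=/path/to/data_dir   # placeholder; substitute your actual data directory

# 1) In blech_clust/params/waveform_classifier_params, set
#    use_neuRecommend, use_classifier, and throw_out_noise to false.

# 2) Remove the stale results log so all electrodes are reprocessed
#    (-f makes this a no-op if the file is already gone).
rm -f "$DIR/results.log"

# 3) Rerun processing from the blech_clust directory:
# bash blech_run_process.sh "$DIR"
```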
That fixed it, but I got the following message after rerunning, and no output was generated:
local:4/26/100%/3.5s Using blech_spike_features
=== Performing manual clustering ===
======================================
Classifier output not found, please run blech_run_process.sh with classifier.
======================================
That seems to be an unresolved issue in blech_clust/utils/blech_process_utils.py. check_classifier_data_exists exits if classifier data is not found, and the function is run when initializing cluster_handler. This should not be happening. You can remove the call to check_classifier_data_exists in the initialization, and that should work:
class cluster_handler():
    """
    Class to handle clustering steps
    """
    def __init__(self, params_dict,
                 data_dir, electrode_num, cluster_num,
                 spike_set, fit_type='manual'):
        assert fit_type in ['manual', 'auto'], 'fit_type must be manual or auto'
        self.check_classifier_data_exists(data_dir)

    def check_classifier_data_exists(self, data_dir):
        clf_list = glob(os.path.join(
            data_dir,
            'spike_waveforms/electrode*/clf_prob.npy'))
        if len(clf_list) == 0:
            print()
            print('======================================')
            print('Classifier output not found, please run blech_run_process.sh with classifier.')
            print('======================================')
            print()
            exit()
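An alternative to deleting the call outright would be to make the check non-fatal, so manual clustering can proceed with a warning when classifier output is absent. A sketch of that idea (an assumption about a possible fix, not the repo's actual code):

```python
import os
from glob import glob


def classifier_data_exists(data_dir):
    """Non-fatal variant of check_classifier_data_exists: report
    whether classifier output is present instead of calling exit().

    Hypothetical sketch; the function name and behavior here are
    an illustration, not blech_clust's implementation.
    """
    clf_list = glob(os.path.join(
        data_dir, 'spike_waveforms/electrode*/clf_prob.npy'))
    if len(clf_list) == 0:
        # Warn but let the caller decide whether to continue
        print('Classifier output not found; continuing without it.')
        return False
    return True
```

The initializer could then call this and simply record the result, rather than terminating the process during a manual fit.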
That resolved the issue, thank you!!
I was encountering the same issue. I tried commenting out the same line, self.check_classifier_data_exists(data_dir) (line 48 of blech_clust/utils/blech_process_utils.py), and then no plots seemed to appear. Here is the message I got:
bash blech_run_process.sh $DIR
Processing /media/thomas/Data/BackupData/TG37_active_RetEB_OrthoEB_Day4_240229_144222
Retry 1
Computers / CPU cores / Max jobs to run
1:local / 32 / 20
0
[identical output repeats for Retry 2 through Retry 10]
Any advice?
Hmm, could you run python blech_process.py $DIR 0 and tell me what you get? blech_process.py is the base script that processes a single electrode, so it should tell us what is happening at the single-electrode level. The error output is pretty much lost at the blech_run_process.sh level.
On commit e9a5857, I ran into an issue running bash blech_run_process.sh. A snippet of the error message for electrode 5 is below, but the same message repeats on about half of the electrodes. No cluster plots were generated for any electrodes.