Closed · natashakbt closed 1 month ago
Does this recording have uneven trials across tastes?
While running the experiment, I recorded from two extra dig-ins that had uneven trial counts (0 and 1). But I deleted the `board-DIN-##.dat` files for those dig-ins before running the pipeline. Is there something else that needs to be deleted? Otherwise, the number of trials is even across the taste dig-ins.
Hmm, my guess is that it's expecting 5 dig-ins in one part of the code but not another... I'll have to take a closer look. Could you please upload the dataset to my folder on the katz drive?
Also, could you please run `git log -1` and confirm that you are on commit `122ec17` (this should match the first 7 characters of the hash)?
The commit number is correct. Copying over the data now, thank you!
Sorry, I haven't had a chance to look at it yet... will get to it by tomorrow at the latest.
If you happen to have spare time on your hands, you can try updating to commit e9a5857, which was the last major update to the EMG part of `blech_make_arrays`. It will make some changes to spike-sorting, which you have the option of overriding; we'll just have to update one config file, after which your workflow will be exactly the same.
You'll have to do:

```
git fetch remote master
git merge e9a5857
```
`blech_make_arrays` completes successfully for your data on commit https://github.com/abuzarmahmood/blech_clust/commit/e9a5857800b8d09c1ab5e58bd8fb041fc48aac25.
Making the EMG arrays on commit https://github.com/abuzarmahmood/blech_clust/commit/122ec173f66cd0ca7374f59cadaa3984639987d0 has an issue with assigning indices to dig-ins. In your case, the laser dig-in had a smaller number than the taste dig-ins, hence it was assigned an index of 0 while the taste dig-ins were assigned indices of [1,2,3,4]. The `emg_data` array had a size of 4 in the taste dimension, but due to zero-indexing in Python, it threw an error when indexed with the last taste dig-in (4).
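For anyone hitting this later, here's a minimal sketch of the off-by-one described above (variable names are illustrative only, not the actual `blech_make_arrays.py` code):

```python
import numpy as np

# Laser dig-in (11) plus four taste dig-ins, mirroring the dataset above.
dig_ins = [11, 12, 13, 14, 15]
taste_dig_ins = [12, 13, 14, 15]

# Buggy scheme: indices assigned across ALL dig-ins, so the laser gets 0
# and the tastes get [1, 2, 3, 4].
buggy_index = {d: i for i, d in enumerate(sorted(dig_ins))}

# But the EMG array is sized by the number of taste dig-ins only (4),
# so the valid taste indices are 0..3.
emg_data = np.zeros((2, len(taste_dig_ins), 30, 100))  # (emg, taste, trial, time)

try:
    for d in taste_dig_ins:
        emg_data[0, buggy_index[d], 0, :] = 1.0  # buggy_index[15] == 4
except IndexError as e:
    print(e)  # index 4 is out of bounds for axis 1 with size 4

# Fixed scheme: enumerate only the taste dig-ins, yielding [0, 1, 2, 3].
fixed_index = {d: i for i, d in enumerate(sorted(taste_dig_ins))}
for d in taste_dig_ins:
    emg_data[0, fixed_index[d], 0, :] = 1.0      # all indices in range
```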
Please update according to the above instructions and continue with the EMG pipeline.
You will likely have to copy the template params files from `blech_clust/params/_templates` to `blech_clust/params/`.
If you don't want to use the classifier + auto-clustering functions, please confirm the following:

- `auto_cluster` under `auto_params` in `sorting_params_template.json` is set to `false`
- `use_neuRecommend` and `use_classifier` under `waveform_classifier_params` are set to `false`
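For reference, the relevant entries would look something like this (a sketch; the exact nesting in the template files may differ, so check against the copied params files):

```
# sorting_params_template.json (excerpt)
"auto_params": {
    "auto_cluster": false
}

# params file containing waveform_classifier_params (excerpt)
"waveform_classifier_params": {
    "use_neuRecommend": false,
    "use_classifier": false
}
```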
I am closing this issue, but please reopen if you have any further issues.
I'm having trouble updating to commit e9a5857. I tried running `git fetch remote master` while in the blech_clust directory and with the blech_clust conda environment activated, but I get the following:

```
fatal: 'remote' does not appear to be a git repository
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```
Trying to run `git fetch remote master` in the home directory (in both the blech_clust environment and the base environment) gives me `fatal: not a git repository (or any of the parent directories): .git`. Is there another way I should try?
Sorry, try `git fetch origin master`. The second step should remain the same.
That worked, thank you! Now when I run `git merge e9a5857` I get:

```
Updating 122ec17..e9a5857
error: Your local changes to the following files would be overwritten by merge:
	blech_clust_pre.sh
Please commit your changes or stash them before you merge.
Aborting
```
I don't remember making any changes, and I'm sure whatever has changed would be fine to replace. What should I do? Sorry about such basic questions.
Try doing `git merge e9a5857 --force`. From what I can tell, the `blech_clust_pre.sh` in the commit we're updating to shouldn't throw any errors.
Running `git merge e9a5857 --force` didn't work: `error: unknown option 'force'`.
`git diff` output:

```diff
diff --git a/blech_clust_pre.sh b/blech_clust_pre.sh
index d3b0908..92dcb05 100644
--- a/blech_clust_pre.sh
+++ b/blech_clust_pre.sh
@@ -4,4 +4,4 @@ python blech_clust.py $DIR &&
 echo Running Common Average Reference
 python blech_common_avg_reference.py $DIR &&
 echo Running Jetstream Bash
-for x in $(seq 10);do bash blech_run_process.sh $DIR;done &&
+for x in $(seq 10);do bash blech_run_process.sh $DIR;done
```
To fix, I ran `git checkout blech_clust_pre.sh` to get rid of the changes blocking the merge, which then allowed me to successfully run `git merge e9a5857`.
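For future readers: the error message's own suggestion, `git stash`, is a non-destructive alternative to checking out the file, since the local edit stays recoverable afterwards. A runnable sketch in a scratch repo (hypothetical file contents, not the real blech_clust repo):

```shell
set -e
repo=$(mktemp -d)            # scratch repo so nothing real is touched
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "original" > blech_clust_pre.sh
git add blech_clust_pre.sh
git commit -qm "initial"

echo "local edit" > blech_clust_pre.sh   # uncommitted change that would block a merge
git stash                                # shelve it; the working tree is clean again
cat blech_clust_pre.sh                   # prints: original
# ...a `git merge <commit>` would now proceed; the edit is recoverable:
git stash pop
cat blech_clust_pre.sh                   # prints: local edit
```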
Thank you for all the help!
Encountered an error running `blech_make_arrays.py` on EMG-only data. Output:

```
Processing: /media/natasha/drive2/Natasha_Data/YW8/YW8_test1_4tastes_240508_152905/
Taste dig_ins ::: dig_in_12 dig_in_13 dig_in_14 dig_in_15
Laser dig_in ::: dig_in_11
Calculating cutoff times 100%|██████████| 2/2 [00:06<00:00, 3.41s/it]
/home/natasha/anaconda3/envs/blech_clust/lib/python3.8/site-packages/pandas/core/generic.py:2434: PerformanceWarning: your performance may suffer as PyTables will pickle object types that it cannot map directly to c-types [inferred_type->mixed,key->block2_values] [items->Index(['breaches_per_sec', 'electrode_type', 'electrode_name'], dtype='object')]
  pytables.to_hdf(
     dig_ins  trials_before_cutoff  trials_after_cutoff
0  dig_in_11                    60                    0
1  dig_in_12                    30                    0
2  dig_in_13                    30                    0
3  dig_in_14                    30                    0
4  dig_in_15                    30                    0
Using durations ::: [2000, 5000]
No sorted units found...NOT MAKING SPIKE TRAINS
Creating laser info for dig_in_12
Processing: laser from dig_in_11
Creating laser info for dig_in_13
Processing: laser from dig_in_11
Creating laser info for dig_in_14
Processing: laser from dig_in_11
Creating laser info for dig_in_15
Processing: laser from dig_in_11
EMG Data found ==> Making EMG Trial Arrays
Traceback (most recent call last):
  File "blech_make_arrays.py", line 386, in
    emg_data[i, j, k, :] = \
IndexError: index 4 is out of bounds for axis 1 with size 4
Closing remaining open files:/media/natasha/drive2/Natasha_Data/YW8/YW8_test1_4tastes_240508_152905/YW8_test1_4tastes_240508_152905.h5...done
```