harshk95 opened 2 years ago
@jenniferColonell any advice on this?
Hi @harshk95, sorry it took me so long to get around to answering this question! These errors show that the start times in the metadata files are not consistent with consecutive trials -- the negative values for 'rem' indicate that the calculated end time of the concatenated file comes AFTER the end of the file it is trying to add. I'm guessing from the errors that these were actually just independent recordings (that is, not collected as trials of a single run) that you need to concatenate. Indeed, supercat, which simply concatenates recordings end to end, is the correct CatGT command.
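If you want to check this yourself before running anything, here's a minimal sketch that compares `firstSample` across consecutive `.meta` files. The keys used (`firstSample`, `fileSizeBytes`, `nSavedChans`) are standard SpikeGLX metadata fields; the folder name and glob pattern are placeholders for your own layout:

```python
# Minimal sketch: check whether t0..tn files are truly consecutive trials.
# Folder/glob are placeholders; lexical sort assumes <10 trials or zero-padding.
from pathlib import Path

def read_meta(meta_path):
    """Parse a SpikeGLX .meta file into a dict of key -> string value."""
    meta = {}
    for line in Path(meta_path).read_text().splitlines():
        if '=' in line:
            key, val = line.split('=', 1)
            meta[key.strip().lstrip('~')] = val.strip()
    return meta

metas = [read_meta(p) for p in sorted(Path('myrun_g0').glob('*_t*.imec0.ap.meta'))]
for prev, cur in zip(metas, metas[1:]):
    # int16 data: fileSizeBytes / (2 * nSavedChans) = samples in the file
    n_samp = int(prev['fileSizeBytes']) // (2 * int(prev['nSavedChans']))
    gap = int(cur['firstSample']) - (int(prev['firstSample']) + n_samp)
    # small positive gaps -> real consecutive trials; large or negative gaps
    # mean the sample counter reset, i.e. independent recordings -> supercat
    print(f"next file starts at {cur['firstSample']}: gap = {gap} samples")
```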
@jenniferColonell Hi! I use your modified edition for SpikeGLX and want to concatenate bin files. But the bin files are not from different trials separated by triggers. When we recorded, SpikeGLX would sometimes crash because of a disk-writing problem, so we started recording again (independent recordings). I changed the names of these bin files to t0~n and set t as 0,n to run the pipeline. But CatGT just created a bin file whose size is the same as the last recording (xxx_g0_tn.imec0.bin file). Why? Can I use the pipeline to concatenate them?
Hi @PathwayinGithub The specific problem you are seeing probably has to do with paths. However, for correct concatenation across multiple runs, you'll need to use the supercat feature in CatGT; for multiple streams, make sure you include the -supercat_trim_edges option. I haven't implemented this in the pipeline because it's a less common case, but I can help you with writing the appropriate .bat files if that's useful. The basic procedure (see the CatGT ReadMe for details, and the sketch after this list) is:

1. Run CatGT on your individual runs to do filtering, artifact removal, and edge finding. You can write a .bat script in Windows to process all your runs.
2. Run CatGT with the -supercat feature and the -supercat_trim_edges option to concatenate the runs for all your data streams (e.g. imec probes and NI).
3. Run the pipeline using a script based on sglx_filelist_pipeline.py, which skips CatGT and runs sorting plus the other modules.
4. Run TPrime with a batch script. There are good instructions in the TPrime ReadMe, but I'm happy to help with that also.
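For reference, here's roughly the shape of the two CatGT passes as a .bat script. This is a sketch, not a tested command line: the run names (run1, run2), drive paths, probe count, and extractor parameters are all placeholders to replace with your own, and you should confirm the exact -supercat element format against the CatGT ReadMe.

```bat
:: (1) Per-run pass: filter, remove artifacts, and extract sync edges for
:: each run separately. Run names, paths, and extractor params are examples.
CatGT -dir=D:\data -run=run1 -g=0 -t=0 -prb=0 -prb_fld -ap -ni ^
      -gblcar -xd=2,0,384,6,500 -dest=D:\catgt_out
CatGT -dir=D:\data -run=run2 -g=0 -t=0 -prb=0 -prb_fld -ap -ni ^
      -gblcar -xd=2,0,384,6,500 -dest=D:\catgt_out

:: (2) Supercat pass: join the per-run outputs end to end, listing the same
:: streams and extractors so the edge files are offset and joined as well.
:: (You'd also need an NI sync extractor, -xa or -xd, matching your wiring.)
CatGT -supercat={D:\catgt_out,catgt_run1_g0}{D:\catgt_out,catgt_run2_g0} ^
      -supercat_trim_edges -prb=0 -prb_fld -ap -ni ^
      -xd=2,0,384,6,500 -dest=D:\supercat_out
```

Step (4) with TPrime might then look something like the lines below, again with placeholder stream indices and file paths (the real edge files are written by the CatGT passes above):

```bat
:: (4) Map event times from each stream onto the reference stream's timeline.
:: -syncperiod is the sync generator period in seconds (1.0 is typical).
TPrime -syncperiod=1.0 ^
       -tostream=D:\supercat_out\imec0_sync_edges.txt ^
       -fromstream=1,D:\supercat_out\ni_sync_edges.txt ^
       -events=1,D:\supercat_out\ni_events.txt,D:\supercat_out\ni_events_adj.txt
```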
By the way, what kinds of disk writing problems are you having? Are your disks filling up? If you are running multiple probes, you can direct the data streams to different disks to avoid that (this is a feature in SpikeGLX).
Hi, we had an issue concatenating recordings from different triggers, acquired with SpikeGLX from an NP1.0 probe. We followed the inline comments in 'sglx_multi_run_pipeline.py' in the fork for SpikeGLX data and had 5 different triggers to concatenate. However, it does not seem that we got a file concatenated from all the runs, since its duration is much shorter than expected, and we get the following log from CatGT.
This is what the folder with the run looks like -
We noticed in the CatGT documentation there is a mention of supercat, and we were wondering whether this is the command that should be run. Thanks!