SpikeInterface / spikeinterface

A Python-based module for creating flexible and robust spike sorting pipelines.
https://spikeinterface.readthedocs.io
MIT License

SpikeGLXRecordingExtractor fails when gate > 0 #628

Closed TomBugnon closed 2 years ago

TomBugnon commented 2 years ago

Hi @samuelgarcia and all, Initializing a spikeinterface.extractors.SpikeGLXRecordingExtractor object from a probe directory with gate index 0 works properly:

$ ls subject_dir/run_name_g0/run_name_g0_imec0
1-6-2022_g0_t0.imec0.lf.bin   1-6-2022_g0_t1.imec0.lf.bin   1-6-2022_g0_t2.imec0.lf.bin
1-6-2022_g0_t0.imec0.lf.meta  1-6-2022_g0_t1.imec0.lf.meta  1-6-2022_g0_t2.imec0.lf.meta
se.SpikeGLXRecordingExtractor("subject_dir/run_name_g0/run_name_g0_imec0", "imec0.lf") # Assuming folder-per-probe organisation

SpikeGLXRecordingExtractor: 384 channels - 1 segments - 2.5kHz - 7200.001s

... But it fails when the gate index is >0

$ ls subject_dir/run_name_g1/run_name_g1_imec0
1-6-2022_g1_t0.imec0.lf.bin   1-6-2022_g1_t2.imec0.lf.meta  1-6-2022_g1_t5.imec0.lf.bin   1-6-2022_g1_t7.imec0.lf.meta
1-6-2022_g1_t0.imec0.lf.meta  1-6-2022_g1_t3.imec0.lf.bin   1-6-2022_g1_t5.imec0.lf.meta  1-6-2022_g1_t8.imec0.lf.bin
1-6-2022_g1_t1.imec0.lf.bin   1-6-2022_g1_t3.imec0.lf.meta  1-6-2022_g1_t6.imec0.lf.bin   1-6-2022_g1_t8.imec0.lf.meta
1-6-2022_g1_t1.imec0.lf.meta  1-6-2022_g1_t4.imec0.lf.bin   1-6-2022_g1_t6.imec0.lf.meta  1-6-2022_g1_t9.imec0.lf.bin
1-6-2022_g1_t2.imec0.lf.bin   1-6-2022_g1_t4.imec0.lf.meta  1-6-2022_g1_t7.imec0.lf.bin   1-6-2022_g1_t9.imec0.lf.meta
se.SpikeGLXRecordingExtractor("subject_dir/run_name_g1/run_name_g1_imec0", "imec0.lf")
KeyError                                  Traceback (most recent call last)
Input In [47], in <cell line: 1>()
----> 1 se.SpikeGLXRecordingExtractor('.', 'imec0.lf')

File ~/projects/spikeinterface/spikeinterface/extractors/neoextractors/spikeglx.py:38, in SpikeGLXRecordingExtractor.__init__(self, folder_path, stream_id)
     36 if HAS_NEO_10_2:
     37     neo_kwargs['load_sync_channel'] = False
---> 38 NeoBaseRecordingExtractor.__init__(self, stream_id=stream_id, **neo_kwargs)
     40 #~ # open the corresponding stream probe
     41 if HAS_NEO_10_2 and '.ap' in self.stream_id:

File ~/projects/spikeinterface/spikeinterface/extractors/neoextractors/neobaseextractor.py:26, in NeoBaseRecordingExtractor.__init__(self, stream_id, **neo_kwargs)
     24 def __init__(self, stream_id=None, **neo_kwargs):
---> 26     _NeoBaseExtractor.__init__(self, **neo_kwargs)
     28     # check channel
     29     # TODO propose a meachanisim to select the appropriate channel groups
     30     # in neo one channel group have the same dtype/sampling_rate/group_id
     31     # ~ channel_indexes_list = self.neo_reader.get_group_signal_channel_indexes()
     32     stream_channels = self.neo_reader.header['signal_streams']

File ~/projects/spikeinterface/spikeinterface/extractors/neoextractors/neobaseextractor.py:16, in _NeoBaseExtractor.__init__(self, **neo_kwargs)
     14 neoIOclass = eval('neo.rawio.' + self.NeoRawIOClass)
     15 self.neo_reader = neoIOclass(**neo_kwargs)
---> 16 self.neo_reader.parse_header()
     18 assert self.neo_reader.block_count() == 1, \
     19     'This file is neo multi block spikeinterface support one block only dataset'

File ~/miniconda3/envs/rebooting/lib/python3.10/site-packages/neo/rawio/baserawio.py:185, in BaseRawIO.parse_header(self)
    172 def parse_header(self):
    173     """
    174     This must parse the file header to get all stuff for fast use later on.
    175
   (...)
    183
    184     """
--> 185     self._parse_header()
    186     self._check_stream_signal_channel_characteristics()

File ~/miniconda3/envs/rebooting/lib/python3.10/site-packages/neo/rawio/spikeglxrawio.py:106, in SpikeGLXRawIO._parse_header(self)
    103 signal_channels = []
    104 for stream_name in stream_names:
    105     # take first segment
--> 106     info = self.signals_info_dict[0, stream_name]
    108     stream_id = stream_name
    109     stream_index = stream_names.index(info['stream_name'])

KeyError: (0, 'imec0.lf')

The same issue occurs when directly instantiating the SpikeGLXRawIO object from neo. It seems like the gate (and not only the stream) should either be passed as a kwarg or parsed from the folder name?

Thanks!

SpikeInterface version = 0.94.1.dev0, neo version = 0.12.2

grahamfindlay commented 2 years ago

If you want to parse the filenames themselves for trigger/gate/run info, here's a snippet I wrote for that and often use (source):

import re

def parse_sglx_fname(fname):
    """Parse recording identifiers from a SpikeGLX style filename.
    Parameters
    ---------
    fname: str
        The filename to parse, e.g. "my-run-name_g0_t1.imec2.lf.bin"
    Returns
    -------
    run: str
        The run name, e.g. "my-run-name".
    gate: str
        The gate identifier, e.g. "g0".
    trigger: str
        The trigger identifier, e.g. "t1".
    probe: str
        The probe identifier, e.g. "imec2"
    stream: str
        The data type identifier, "lf" or "ap"
    ftype: str
        The file type identifier, "bin" or "meta"
    Examples
    --------
    >>> parse_sglx_fname('3-1-2021_A_g1_t0.imec0.lf.meta')
    ('3-1-2021_A', 'g1', 't0', 'imec0', 'lf', 'meta')
    """
    x = re.search(
        r"_g\d+_t\d+\.imec\d+\.(ap|lf)\.(bin|meta)\Z", fname
    )  # \Z forces match at string end; dots are escaped so they only match a literal '.'
    run = fname[: x.span()[0]]  # The run name is everything before the match
    gate = re.search(r"g\d+", x.group()).group()
    trigger = re.search(r"t\d+", x.group()).group()
    probe = re.search(r"imec\d+", x.group()).group()
    stream = re.search(r"(ap|lf)", x.group()).group()
    ftype = re.search(r"(bin|meta)", x.group()).group()

    return (run, gate, trigger, probe, stream, ftype)

alejoe91 commented 2 years ago

Thanks for the snippet @grahamfindlay

Anyway, I think that NEO should allow parsing gates + streams... @samuelgarcia thoughts?

samuelgarcia commented 2 years ago

Hi. I am not super familiar with this "gate" concept. In my mind, it was equivalent to a neo "segment": see https://github.com/NeuralEnsemble/python-neo/blob/master/neo/rawio/spikeglxrawio.py#L291 In short, another piece of the same recording.

But seeing your dataset, it appears that I was totally wrong! In fact, I have never had such a dataset in my hands. I think the t0 relates more to the segment concept, no?

The g0 or g1 must relate to something. What does a gate represent? Could you send me this dataset with super-shortened .bin files, so I could patch the neo side?

samuelgarcia commented 2 years ago

@TomBugnon : I was not aware of your fork here: https://github.com/CSC-UW/spikeinterface. One day we could have a call to discuss how to merge your changes and wishes into the main repo.

TomBugnon commented 2 years ago

Hi @samuelgarcia, Bill Karsh explains the concept of gates and triggers here.

If I understood correctly, within a single "run" the gate can be opened or closed to stop and restart acquisition, saving files in a different directory each time, and the trigger increments split the ongoing recording (for a given gate) into multiple files. I am not certain whether files are necessarily contiguous across triggers (which would match the definition of spikeinterface segments?), but they are definitely not necessarily contiguous across gates.

The output directory structure is one of the following depending on the "folder-per-probe" parameter:

# folder-per-probe == True
- run_dir/ (example: 3-1-2021/)
  - gate_dir_g0/ (example: 3-1-2021_g0/)
  ...
  - gate_dir_gN/
    - probe_dir_imec0/ (example: 3-1-2021_g0_imec0/)
    ...
    - probe_dir_imecN/
      - trigger_file_gN_imecN_t0.lf.bin
      - trigger_file_gN_imecN_t0.lf.meta
      - trigger_file_gN_imecN_t0.ap.bin
      - trigger_file_gN_imecN_t0.ap.meta
      ...
      - trigger_file_gN_imecN_tN.lf.bin
      - trigger_file_gN_imecN_tN.lf.meta
      - trigger_file_gN_imecN_tN.ap.bin
      - trigger_file_gN_imecN_tN.ap.meta
#  folder-per-probe == False
- run_dir/ (example: 3-1-2021/)
  - gate_dir_g0/ (example: 3-1-2021_g0/)
  ...
  - gate_dir_gN/
      - trigger_file_gN_imec0_t0.lf.bin
      - trigger_file_gN_imec0_t0.lf.meta
      - trigger_file_gN_imec0_t0.ap.bin
      - trigger_file_gN_imec0_t0.ap.meta
      ...
      - trigger_file_gN_imec0_tN.lf.bin
      - trigger_file_gN_imec0_tN.lf.meta
      - trigger_file_gN_imec0_tN.ap.bin
      - trigger_file_gN_imec0_tN.ap.meta
      ...
      - trigger_file_gN_imecN_t0.lf.bin
      - trigger_file_gN_imecN_t0.lf.meta
      - trigger_file_gN_imecN_t0.ap.bin
      - trigger_file_gN_imecN_t0.ap.meta
      ...
      - trigger_file_gN_imecN_tN.lf.bin
      - trigger_file_gN_imecN_tN.lf.meta
      - trigger_file_gN_imecN_tN.ap.bin
      - trigger_file_gN_imecN_tN.ap.meta
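
For illustration, here is a minimal sketch (a hypothetical helper, not part of spikeinterface or neo) that walks the folder-per-probe layout above with pathlib and lists each gate's trigger files:

import sys
from pathlib import Path

def list_trigger_files(run_dir, probe="imec0", stream="lf"):
    """Yield (gate_dir, sorted trigger .bin files) for a folder-per-probe run."""
    run_dir = Path(run_dir)
    # Gate folders look like <run>_g0, <run>_g1, ...
    # (lexical sort is fine for single-digit gate indices)
    for gate_dir in sorted(run_dir.glob(f"{run_dir.name}_g*")):
        probe_dir = gate_dir / f"{gate_dir.name}_{probe}"
        # Trigger files look like <run>_gN_tM.imec0.lf.bin
        bins = sorted(probe_dir.glob(f"*_t*.{probe}.{stream}.bin"))
        yield gate_dir, bins

if __name__ == "__main__":
    for gate_dir, bins in list_trigger_files(sys.argv[1]):
        print(gate_dir.name, [b.name for b in bins])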

One other thing: many people use the CatGT utility provided with SpikeGLX to preprocess and concatenate SGLX files across triggers and gates. The output of CatGT preprocessing is still SGLX-like, so it would be convenient if it were supported. The output has the same structure as the raw data (with or without the folder-per-probe structure), except for the following:

I'll try to find short, realistic files to send you, and ping Bill Karsh so he can double-check that I'm not telling you nonsense.

alejoe91 commented 2 years ago

@TomBugnon do you know if the meta file can change across triggers/gates? In order to have a multi-segment object, we need channels and locations to be the same (no reconfiguration of the probe).

TomBugnon commented 2 years ago

(@alejoe91 I just updated my comment)

TomBugnon commented 2 years ago

Here's the gate tab help

And here is the trigger help -> This suggests that trigger files are not necessarily contiguous.

This suggests that recorded channels can vary across gates within a run (but probably not across triggers? I can't say; Bill will need to confirm). However, CatGT definitely allows concatenating files across gates. I don't know whether it checks that the recorded channels haven't changed when it does so (I assume it ought to).

samuelgarcia commented 2 years ago

OK I see.

If I understand correctly, triggers should be used as neo segments. They are in the same folder.

Different gates are in different folders, so this can be ignored or only propagated to annotations. So the changes are minimal on the neo side and also on the SI side.

Normally CatGT will no longer be necessary with SI, because it will read an entire folder of multiple triggers at once. This will give a Recording with several segments. You will also be able to use append_segment(...) in SI to manually and virtually concatenate several folders with several gate indices; in that case the function will check the channel count.
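
As an illustration of that workflow, here is a minimal sketch (assuming the append_recordings helper from spikeinterface.core.segmentutils, which corresponds to the append_segment(...) idea above; exact names may vary by version) that loads each gate folder separately and stacks them as extra segments:

import spikeinterface.extractors as se
from spikeinterface.core import append_recordings

gate_dirs = [
    "subject_dir/run_name_g0/run_name_g0_imec0",
    "subject_dir/run_name_g1/run_name_g1_imec0",
]

# One recording per gate folder; each trigger file would become one segment
# once the multi-trigger parsing discussed here is in place.
recordings = [se.SpikeGLXRecordingExtractor(d, "imec0.lf") for d in gate_dirs]

# Appending keeps every segment separate (no assumption of contiguity across
# gates); the channel layout must match across the inputs.
multi_gate_rec = append_recordings(recordings)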

TomBugnon commented 2 years ago

That sounds good to me. Regarding CatGT, I was hinting more that it would be good if the SGLX extractors also supported CatGT-processed files (even though concatenation could be performed via SI), because CatGT performs non-trivial preprocessing and is widely used. For instance, in our pipeline we first preprocess contiguous files with CatGT, and then use SI to (possibly concatenate non-contiguous CatGT-processed files and) run the sorter. Otherwise some other extractor specific to CatGT-processed files would be needed.

Also, I think the SpikeGLXRawIO object and the SGLXRecordingExtractor should allow restricting the extractor to a range of triggers, rather than necessarily using all the trigger files in the directory.

billkarsh commented 2 years ago

Once SpikeGLX acquisition is started most functional parameters can not be changed, so should be the same across multiple g and t indices. Things that are changeable in metadata include additional user annotations and file {length, SHA, timestamp}.

There are several mini programs that determine how a sequence of files is written; these are called 'triggers' and produce files with advancing t index. Depending on the trigger parameters, the t-files may or may not overlap. A given trigger program (t-series) can be 'run' multiple times, and each time, the g index is advanced.

All files with the same base run-name share parameters and come from the same underlying SpikeGLX hardware run (a continuous stream of consecutive samples), so have a time relation that allows them to be sewn back together (but with possible gaps and/or overlaps that need to be trimmed). The metadata 'firstSample' item is the starting sample number of this file (in that common underlying stream).

CatGT can sew g and t series files back together, but optionally also performs various types of filtering and artifact removal; notably tshift, which undoes a subsample shape distortion due to the probe's ADC multiplexing. CatGT works correctly and will be properly maintained to keep up with new types of probes and special handling considerations. It makes sense to support CatGT output. It makes less sense to bypass CatGT and duplicate its operations into multiple downstream components.
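
To make the firstSample bookkeeping concrete, here is a minimal sketch (assuming the standard key=value .meta layout with firstSample, fileSizeBytes and nSavedChans entries, and int16 samples) of computing the gap or overlap between two consecutive t-files:

from pathlib import Path

def read_meta(meta_path):
    """Parse a SpikeGLX .meta file into a dict of key -> string value."""
    meta = {}
    for line in Path(meta_path).read_text().splitlines():
        if "=" in line:
            key, val = line.split("=", 1)
            meta[key.lstrip("~")] = val
    return meta

def gap_between(meta_a, meta_b):
    """Samples between the end of file A and the start of file B (< 0 means overlap)."""
    n_chans = int(meta_a["nSavedChans"])
    n_samples_a = int(meta_a["fileSizeBytes"]) // (2 * n_chans)  # int16 samples
    end_a = int(meta_a["firstSample"]) + n_samples_a
    return int(meta_b["firstSample"]) - end_a

# Example (hypothetical file names):
# gap = gap_between(read_meta("run_g0_t0.imec0.ap.meta"), read_meta("run_g0_t1.imec0.ap.meta"))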

samuelgarcia commented 2 years ago

@TomBugnon : both SpikeGLXRawIO and SGLXRecordingExtractor have lazy loading. In short, you get all segments (triggers), but you can reduce the trigger range with rec.select_segments(...). Have a look at si.core.segmentutils.py. So I don't think it is necessary to overload the reader signature with trigger selection; we can already do it afterwards in a lazy way.
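
A minimal sketch of that idea (assuming the rec.select_segments(...) method mentioned above; name and availability may differ across spikeinterface versions):

import spikeinterface.extractors as se

# All trigger files in the folder are exposed as segments of one recording
# (once the multi-trigger parsing discussed above is available).
rec = se.SpikeGLXRecordingExtractor("subject_dir/run_name_g1/run_name_g1_imec0", "imec0.lf")

# Lazily keep only triggers t0..t2; no data is read or copied here.
rec_t0_t2 = rec.select_segments([0, 1, 2])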

@billkarsh : thank you for this clarification. I will try to propagate this into the neo reader, and so it will be available in spikeinterface.

@billkarsh @TomBugnon : Also, note that we have sample_shift() implemented in a lazy way directly in spikeinterface. See https://spikeinterface.github.io/blog/spikeinterface-destripe/ So it could also be convenient to compute the preprocessing directly in spikeinterface without the need to save everything into a big cached file. The spikeinterface preprocessing has a parallel implementation which is quite efficient. Maybe comparing the output of CatGT and the SI equivalent could be useful.
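
As a rough illustration only (an assumption on my part: using the current spikeinterface.preprocessing names phase_shift and common_reference, which approximate CatGT's tshift + gblcar; the blog post above has the authoritative recipe):

import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre

rec = se.SpikeGLXRecordingExtractor("subject_dir/run_name_g0/run_name_g0_imec0", "imec0.ap")

# Undo the per-channel ADC sampling offsets (the tshift idea).
rec_shifted = spre.phase_shift(rec)

# Global median reference across channels (roughly gblcar).
rec_car = spre.common_reference(rec_shifted, reference="global", operator="median")

# Everything above is lazy; traces are processed only when requested.
traces = rec_car.get_traces(segment_index=0, start_frame=0, end_frame=30000)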

To also handle the CatGT format, we will need some testing files that are public for CI purposes.

grahamfindlay commented 2 years ago

If some simple recordings in saline (i.e. non-neural) are okay to get started with, I can make you any testing files you need tomorrow. If I am following everything correctly, this should cover all the cases that might need handling:

(1) Multiple probes. 
(2) Multiple g-indices
x
(3) Multiple t-files
  (3a) Multiple, contiguous t-files
  (3b) Multiple, overlapping t-files
  (3c) Multiple, discontinuous t-files
x
(4) Folder-per-probe organization
  (4a) With folder-per-probe organization
  (4b) Without folder-per-probe organization

So this will be 6 runs in total. Assuming the minimum of 2 probes, 2 t-files, and 2 g-indices per run, with very short (~10s each) t-files, this will be ~2GB per run, so ~12GB total. Is that okay, @samuelgarcia?

samuelgarcia commented 2 years ago

Hi, thanks a lot for proposing this.

These files will be pushed here: https://gin.g-node.org/NeuralEnsemble/ephy_testing_data

I was not aware of the (3b) possibility. Overlapping could potentially and conceptually be a problem, but I think it is a corner case.

The files have to be ultra small. They are downloaded by several projects for continuous integration testing (neo, spikeinterface and nwb conversion tools), so even 10 ms is enough for each file. If not, we will cut them manually. The most important thing for testing is parsing the meta file.

billkarsh commented 2 years ago

Guys, overlapping files is not a rare case. I really think you should let CatGT be your front end for ingesting SpikeGLX data. As the author of both programs I know exactly how they work and keep them in sync. That's my full time job. Why don't you want to use an existing and correct resource?

alejoe91 commented 2 years ago

@billkarsh I think we should also allow loading CatGT-processed data directly into SI. On the other hand, the sample shift / destriping is also useful for NPIX Open Ephys data. Note that our implementation is ported from the IBL code base, which was yet another version of the same process.

grahamfindlay commented 2 years ago

Tom and I agree with Bill that supporting CatGT-processed files is a good idea. As Tom mentioned, it is widely used, and as Bill mentioned, it is fastidiously maintained. I would add that it is also multi-platform and very fast. And many of the functions it performs are quite necessary for correctness (e.g. the demultiplexed common-average referencing). If CatGT-processed files were not supported, it would be necessary to re-implement much of its functionality. And that would almost certainly be more work than simply supporting CatGT.

Of course, it would be nice to just pip install spikeinterface and have everything you need to get started sorting, including all your pre-processing. And someone working with multiple Neuropixels datasets, not all of which were collected with SpikeGLX, might want a standardized preprocessing procedure for all data regardless of provenance, and would not be able to use CatGT for all of it. So I also see clear value in SI-based preprocessing. FYI, for Open Ephys data there is also a C++ tool for demuxed CAR here, but I do not know if anyone outside of the Allen Institute uses it.

But in any case, I think we are all on the same page that it would be great for neo to support raw SpikeGLX data (i.e. before CatGT) in all the ways that it comes.

billkarsh commented 2 years ago

I want the community to have access to correct implementations, where that is possible. A few points.

(1) Regarding "demultiplexed common-average referencing," that sounds like the old global demux, which is now replaced by the preferred (tshift + gblcar).
(2) But in either case, each probe class/type has its own channel-to-ADC mapping, and doing any multiplexing correction needs an awareness of the probe type, which is in SpikeGLX metadata and observed in CatGT.
(3) Does the IBL destriper case out probe type for all probes?
(4) Does Open Ephys record probe type?
(5) Just a thought. Instead of having the user first call CatGT and then have SI import that, why not have SI call CatGT? Admittedly the CatGT commandline is complicated and automating the call is helpful.
(6) If SI could generate a SpikeGLX metadata file for Open Ephys data, then all data sets could be processed by CatGT. Not all SpikeGLX metadata items are needed for processing by CatGT, but if the needed ones are not available in an Open Ephys dataset then fixing that deficit is super valuable.

alejoe91 commented 2 years ago

@billkarsh let's have a chat about this! I would include @jsiegle and @oliche in the loop :) (btw IBL and SI have Tshift + global demux too)

alejoe91 commented 2 years ago

Another possibility would also be to collaborate on the SI side and make sure that the implementations match and are constantly up to date. The nice thing about the SI implementation is that it's lazy, so you just process the traces that you ask for (e.g. for visualization)

grahamfindlay commented 2 years ago

Yes, you're right @billkarsh. I'm stuck on the old name, but I mean tshift + gblcar. And actually, as you suggest, here in our lab we do something similar to what you describe. We build up a CatGT command (source here)* and then call the command line utility from Python. This makes things very user-friendly, especially for our users who aren't so comfortable with UNIX or the shell. But it is also just very nice because all our data organization / path management / experiment-info databases / processing pipelines that influence the CatGT call are in Python. It works really well for us, and our calls to SpikeInterface are basically wrappers that include CatGT handling.
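
For example, a minimal sketch of that pattern (the flags mirror the catGTCmdline0 examples quoted later in this thread; the helper name and paths are hypothetical):

import subprocess

def run_catgt(catgt_bin, data_dir, run_name, gate=0, triggers="0,1", probes="0:1"):
    """Assemble a CatGT command and run the command-line utility."""
    cmd = [
        catgt_bin,
        f"-dir={data_dir}",
        f"-run={run_name}",
        f"-g={gate}",
        f"-t={triggers}",
        "-prb_fld",      # folder-per-probe input layout
        "-ap", "-lf",    # process both AP and LF streams
        f"-prb={probes}",
    ]
    subprocess.run(cmd, check=True)

# run_catgt("path/to/CatGT", "/data/sample_data/5-19-2022-CI0", "5-19-2022-CI0")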

For what it's worth, I think it would be really cool if the Python SpikeGLX DataFile tools that you and Jennifer distribute here were hosted on PyPI. Maybe such a utility could include an easy Python interface for calling CatGT.

By the way, re-reading this thread, I wonder if one of your questions was about why one would ever want to load raw SpikeGLX data (i.e. before CatGT) into Python. If so, the reason is just storage costs. I regularly do experiments that are 4+TB of raw data. If I have to keep that around for archival/backup purposes, and then also keep a CatGT-processed version around for analysis, it's pricey! So I basically run CatGT, write the output to a RAID0 NVME array, sort the CatGT output, then delete the CatGT files immediately. Then I load raw SpikeGLX LFPs for later analysis alongside the spike-sorted data. Sorry if that answered a question you weren't asking :)

*And yes, this is old, from back before you offered native CatGT for Linux, and we were running the Windows executable with Wine :) We now use the Linux utility.

billkarsh commented 2 years ago

I'm a super friendly guy, trust me. I don't have any free time and have a very full agenda already. I'm not keen on working on a parallel version in SI (or other) of what I already cleared off my plate in CatGT. It's a command line tool so that it can plug into other workflows. Also the source code is there which answers essentially all questions about how I actually do it, so attending meetings to say "how I think I did it" wouldn't even be as accurate. BTW, searching the CatGT sources for ["xxx"] finds CatGT reads of metadata items, so you can see what items are required and how they are parsed and interpreted. That's my first order answer while I reconsider, since I am trying to help as best I can.

grahamfindlay commented 2 years ago

@samuelgarcia The sample data is ready. I opened a GIN issue per the instructions at the link you provided.

grahamfindlay commented 2 years ago

I am also adding here a collection of CatGT output produced from these sample files. This output uses the latest version of CatGT (v3.0). It should be enough to properly test ingestion of CatGT output in Neo (it covers concatenation of t-files within and across g-indices, concatenation of continuous and discontinuous recordings, supercat concatenation of multiple runs, and output in both folder-per-probe and single-folder format).

Data: sample_data_v2.zip

samuelgarcia commented 2 years ago

Hi Graham. Thanks a lot for this. I will push them to the gin repo myself. This looks absolutely perfect. The bin files are small and I think we cover many cases.

samuelgarcia commented 2 years ago

@grahamfindlay Hi Graham, I have started to implement the multi-gate/multi-trigger support in neo. Have a look here: https://github.com/NeuralEnsemble/python-neo/pull/1125 In the folder you provided, SpikeGLX/5-19-2022-CI0/5-19-2022-CI0_g0/5-19-2022-CI0_g0_imec1/,

I have this file list:

5-19-2022-CI0_g0_t0.imec1.ap.bin   5-19-2022-CI0_g0_t0.imec1.lf.meta  5-19-2022-CI0_g0_t1.imec1.lf.bin    5-19-2022-CI0_g0_tcat.imec1.ap.meta              5-19-2022-CI0_g0_tcat.imec1.lf.meta
5-19-2022-CI0_g0_t0.imec1.ap.meta  5-19-2022-CI0_g0_t1.imec1.ap.bin   5-19-2022-CI0_g0_t1.imec1.lf.meta   5-19-2022-CI0_g0_tcat.imec1.ap.xd_384_6_500.txt
5-19-2022-CI0_g0_t0.imec1.lf.bin   5-19-2022-CI0_g0_t1.imec1.ap.meta  5-19-2022-CI0_g0_tcat.imec1.ap.bin  5-19-2022-CI0_g0_tcat.imec1.lf.bin

Some files have the _tcat_ tag, which does not follow the regular expression. Is that normal?

billkarsh commented 2 years ago

CatGT output files are labeled as _tcat. One of the CatGT operations sews a t-series together, e.g. _t0+_t1+_t2+...+_tN becomes _tcat, signifying "concatenated t's." But we generalize so that all CatGT-processed output bin/meta files are labeled _tcat.
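
For the filename parsing discussed earlier, one possible tweak (a suggestion, not part of the snippet above) so that _tcat output also matches:

import re

# Matches both raw trigger files (_t0, _t1, ...) and CatGT output (_tcat).
SGLX_SUFFIX = re.compile(r"_g\d+_t(?:\d+|cat)\.imec\d+\.(ap|lf)\.(bin|meta)\Z")

assert SGLX_SUFFIX.search("5-19-2022-CI0_g0_tcat.imec1.ap.bin")
assert SGLX_SUFFIX.search("5-19-2022-CI0_g0_t1.imec1.lf.meta")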

samuelgarcia commented 2 years ago

Thanks. That was my guess, but in this dataset there are _tcat_ files in the folder SpikeGLX/5-19-2022-CI0/5-19-2022-CI0_g0/5-19-2022-CI0_g0_imec1, which is normally the direct output of SpikeGLX. @grahamfindlay : are these _tcat_ files in this particular folder by mistake? Can I remove them? Does CatGT do the concatenation in place (in the same folder)?

grahamfindlay commented 2 years ago

Hi Sam,

Sorry about that, it must have been a mistake. If you check the metadata for the _tcat_ files, you will actually find a field called catGTCmdline0 that stores the command used to generate the files. The difference between the two sets of _tcat_ files here is that one was created by specifying a destination folder for the output, and the other was not.

 catGTCmdline0=<CatGT -dir=/Volumes/scratch/neuropixels/data/sample_data/5-19-2022-CI0 -run=5-19-2022-CI0 -g=0 -t=0,1 -prb_fld -ap -lf -prb=0:1>

 catGTCmdline0=<CatGT -dir=/Volumes/scratch/neuropixels/data/sample_data/SpikeGLX/5-19-2022-CI0 -run=5-19-2022-CI0 -g=0 -t=0,1 -prb_fld -ap -lf -prb=0:1 -zerofillmax=0 -no_auto_sync -dest=/Volumes/scratch/neuropixels/data/sample_data/CatGT/CatGT-A -out_prb_fld>

The second command format is the one I intended to use. I must have just forgotten to remove the first files from the sample data. You can remove them. But in general, although I never actually do this, it is possible to have CatGT output in the direct output folder if the -dest option is not specified.

Thanks, Graham

billkarsh commented 2 years ago

FWIW, CatGT sends output to the input folders by default and to an optional destination path if specified.

Having the input file set and the CatGT output for that set is an ideal opportunity to test that ingestion of these agree. If there is any difference that's a bug.

Also, the nomenclature is "_tcat". There would never be a following underscore; it is always followed by a file type string (dot something).

samuelgarcia commented 2 years ago

@grahamfindlay @billkarsh : I propose to move the discussion to where I wrote the code: https://github.com/NeuralEnsemble/python-neo/pull/1125