INM-6 / python-neo

Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats
http://packages.python.org/neo/
BSD 3-Clause "New" or "Revised" License

TODO List for GDF IO development #3

Closed by JuliaSprenger 7 years ago

JuliaSprenger commented 9 years ago

GDF IO Development

1) Is .gdf the correct file format? -> go with it for now, possibly change later
2) The number of columns does not reveal the content of the columns -> default: two-column structure (gid, time)
3) Time unit of the time stamps? -> always ms (3 digits), except if time stamps are given in simulation time steps
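A minimal sketch of reading the default two-column (gid, time) layout described above with NumPy; the column meaning and ms unit come from the points above, while the inline example data and variable names are illustrative:

```python
import io

import numpy as np

# Hypothetical example data in the assumed two-column (gid, time) layout,
# times in ms with 3 digits; in practice this would be an on-disk .gdf file.
gdf_text = """1 10.100
2 10.200
1 12.300
3 15.000
"""

data = np.loadtxt(io.StringIO(gdf_text))
gids = data[:, 0].astype(int)  # neuron ids
times = data[:, 1]             # spike times in ms

# collect the spike times of one neuron, e.g. gid 1
spikes_of_1 = times[gids == 1]
print(spikes_of_1)  # [10.1 12.3]
```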

TODO:

- Unittests
  - 4 test cases of input (columns...) (done)
  - check if data types (int, float) are read properly (done)
  - check specific spike times of single neurons: for one id, load with np.loadtxt and compare spike trains (done)
  - test segments: empty list, list of a subset of ids, request all neurons... (done)
  - wrong user input (done)
  - create spike train only for a time interval between t_start and t_stop (done)
  - conductance-based neurons for reading out V_m and g_ex
  - assign spikes to neurons with a different routine (done)
  - upload test data to the G-Node server (contact Thomas Wachtler)
  - run the gdfio test based on remote data (see blackrockio for implementation details)
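The interval cut-out item above can be sketched in a few lines of plain NumPy; the half-open `[t_start, t_stop)` convention and the function name are assumptions for illustration, not the actual GdfIO implementation:

```python
import numpy as np

# Hypothetical sketch: keep only spikes within [t_start, t_stop).
# Whether t_stop itself is included is a design choice to pin down in tests.
def cut_spiketrain(times, t_start, t_stop):
    times = np.asarray(times, dtype=float)
    return times[(times >= t_start) & (times < t_stop)]

print(cut_spiketrain([1.0, 5.0, 9.0, 12.0], t_start=4.0, t_stop=10.0))  # [5. 9.]
```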

- NESTio
  - read more file formats (.dat for membrane potentials)
  - read_segment() for spikes and analog signals is currently overwritten
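For the .dat case, a hedged sketch of the parsing step: the column layout (gid, time, V_m) is an assumption here, as is the inline example data; the real NEST output layout depends on the recorder configuration:

```python
import io

import numpy as np

# Hypothetical analog-signal file: one sample per line, assumed columns
# (gid, time, V_m).
dat_text = """1 0.1 -70.0
2 0.1 -65.0
1 0.2 -69.5
2 0.2 -64.8
"""

data = np.loadtxt(io.StringIO(dat_text))

# membrane potential trace of one neuron, e.g. gid 1
mask = data[:, 0].astype(int) == 1
v_m_1 = data[mask, 2]
print(v_m_1)  # [-70.  -69.5]
```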

- Bugs
  - t_start and t_stop have different units (done)

-Documentation

- NEST core meeting: class hierarchy, which output, which file format, which metadata
  - main idea: NESTio shall be able to load all data types that NEST can write

Neo related:

- A SpikeTrain instance with t_stop = None does not complain directly.
- SpikeTrain has inconsistent time handling: SpikeTrain([1, 2, 3] * pq.ms, t_start=10 * pq.s, t_stop=0.1) results in t_stop = 0.1 * pq.ms -> a job for a long/alper/michael session...?
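The unit inconsistency above suggests a defensive rescale-then-compare step. A minimal, quantities-free sketch of that idea, where the unit table, function names, and error message are all illustrative rather than Neo's actual behavior:

```python
# Hypothetical sketch: rescale t_start and t_stop to a common unit (ms)
# before validating, instead of silently reusing the spike times' unit.
_MS_PER_UNIT = {'s': 1000.0, 'ms': 1.0, 'us': 0.001}

def to_ms(value, unit):
    return value * _MS_PER_UNIT[unit]

def check_bounds(t_start, start_unit, t_stop, stop_unit):
    start_ms = to_ms(t_start, start_unit)
    stop_ms = to_ms(t_stop, stop_unit)
    if stop_ms <= start_ms:
        raise ValueError('t_stop (%g ms) <= t_start (%g ms)'
                         % (stop_ms, start_ms))
    return start_ms, stop_ms

# a t_start of 10 s with a t_stop of 0.1 s is caught explicitly instead of
# being reinterpreted as 0.1 ms
try:
    check_bounds(10, 's', 0.1, 's')
except ValueError as exc:
    print(exc)
```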

DONES for NESTIO:

TODO for NESTIO:

mschmidt87 commented 9 years ago

Committed 6b5e34fce774b5d64aa97780a4702aed219c54b9 with first work.

JuliaSprenger commented 7 years ago

Alternative implementation idea for get_columns in ColumnIO: index all data rows during initialization and reimplement get_columns based on this index. This needs to be tested for performance issues at some point.

    def create_index_for_column(self, column_id):
        # validate the column index against the data array
        if column_id >= self.data.shape[1]:
            raise ValueError('Column index out of range of data set')

        # build the index lazily and cache it in self.indexed_columns
        if column_id not in self.indexed_columns:
            contained_values = np.unique(self.data[:, column_id])
            self.indexed_columns[column_id] = {
                value: np.where(self.data[:, column_id] == value)[0]
                for value in contained_values}

        return self.indexed_columns[column_id]
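A stand-alone illustration of the same indexing idea in plain NumPy (the function name and toy data are hypothetical): build the value-to-rows mapping once, then answer repeated lookups from the dict instead of rescanning the array with np.where each time:

```python
import numpy as np

# toy data: column 0 = gid, column 1 = spike time
data = np.array([[1, 10.1],
                 [2, 10.2],
                 [1, 12.3],
                 [3, 15.0]])

def build_column_index(data, column_id):
    # map each unique value in the column to the row indices where it occurs
    column = data[:, column_id]
    return {value: np.where(column == value)[0] for value in np.unique(column)}

index = build_column_index(data, 0)

# repeated lookups now reuse the index instead of scanning the whole array
rows_of_gid_1 = index[1]
print(data[rows_of_gid_1, 1])  # [10.1 12.3]
```

Whether the one-time indexing cost pays off depends on how many lookups follow, which is exactly the performance question raised above.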