NeurodataWithoutBorders / matnwb

A Matlab interface for reading and writing NWB files
BSD 2-Clause "Simplified" License

Tutorial/Documentation for nwb.units #503

Closed GoktugAlkan closed 1 year ago

GoktugAlkan commented 1 year ago

Hello,

I want to add information about the spiking units obtained from our extracellular recordings. I am trying to use this scheme as a guideline for the implementation in matnwb:

However, I am running into problems when trying to insert information on the waveforms (i.e. waveforms_index and waveforms_index_index). I would like to know if there is a tutorial, documentation, or a very basic example implementing nwb.units with spike_times, spike_times_index, waveforms_index_index, waveforms_index, and waveforms. Based on such a basic example I could try to figure out where my mistakes are.

Many thanks in advance.

lawrence-mbf commented 1 year ago

Hi @GoktugAlkan we don't have a tutorial in matnwb for the units table specifically (though @bendichter maybe there is one out there). However, you can learn more about working with DynamicTable objects in general from the DynamicTables tutorial to get a better grasp of what these columns actually mean and how best to work with them. Note that the index_index suffix means that the property is a VectorIndex referencing another VectorIndex.

GoktugAlkan commented 1 year ago

Thanks @lawrence-mbf. Is there a way to store the customized DynamicTable that would contain all information about the spiking activity inside nwb.processing? Or is there another field in the nwb file that would be an appropriate place to store this kind of data?

I am asking this because the field nwb.intervals_trials is already occupied by some other data.

Thanks in advance!

lawrence-mbf commented 1 year ago

I think @bendichter or @CodyCBakerPhD can better help with this.

CodyCBakerPhD commented 1 year ago

I have no idea how to do this in MatNWB, personally.

In PyNWB, there is simply an index=2 argument for adding full waveforms to the units table

lawrence-mbf commented 1 year ago

Hi @GoktugAlkan I think if you wish to embed full waveforms you will have to recreate what add_column in pynwb does with the index=2 argument. Unfortunately this will require understanding how the data is actually laid out, as covered in the DynamicTables tutorial, specifically the material on ragged arrays, which allow embedding multidimensional data in individual rows.
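What index=2 boils down to can be sketched in plain Python (this is illustrative only, not the pynwb API; the function names are made up): a doubly ragged column is one flat data vector plus two cumulative index vectors.

```python
def encode_doubly_ragged(nested):
    """nested: list (rows) of lists (groups) of lists (values).
    Returns a flat data vector plus two cumulative index vectors."""
    data, index, index_index = [], [], []
    for row in nested:
        for group in row:
            data.extend(group)          # flatten innermost values
            index.append(len(data))     # cumulative end of each group
        index_index.append(len(index))  # cumulative end of each row
    return data, index, index_index

def decode_row(data, index, index_index, i):
    """Recover row i as a list of groups."""
    start = index_index[i - 1] if i else 0
    groups = []
    for g in range(start, index_index[i]):
        lo = index[g - 1] if g else 0
        groups.append(data[lo:index[g]])
    return groups
```

For example, encode_doubly_ragged([[[1, 2], [3]], [[4, 5, 6]]]) yields data [1, 2, 3, 4, 5, 6], index [2, 3, 6], and index_index [2, 3], and decode_row recovers each original row.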

GoktugAlkan commented 1 year ago

@lawrence-mbf I see. I think I could solve the problem if I knew where/how to store a customized DynamicTable object in the nwb file. Is it possible to create a ProcessingModule and store it there? Do you have an idea?

oruebel commented 1 year ago

add_column in pynwb does with the index=2 argument

The index=2 means that this is a ragged array that is ragged along 2 dimensions. The following figure illustrates how this works for waveforms:

https://nwb-schema.readthedocs.io/en/latest/format_description.html#doubly-ragged-arrays

In the schema of the units table you'll see that there is the waveforms column which is of type VectorData to store the actual data values along with two optional waveforms_index and waveforms_index_index columns to describe how the waveforms data needs to be divided.

I believe the following tutorial illustrates how ragged arrays work in MatNWB

https://neurodatawithoutborders.github.io/matnwb/tutorials/html/ecephys.html#H_97F533F8

In this example for spike_times there is only one VectorIndex column. Depending on your use case you may need two VectorIndex columns for the waveforms, but the principle is the same.
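The single-VectorIndex case from that tutorial can be sketched in plain Python (illustrative values, not the matnwb API): the data column holds all units' spikes concatenated, and the index column holds the cumulative end offset of each unit's block.

```python
# Illustrative sketch of one VectorIndex over a flat spike_times column.
spike_times = [0.1, 0.5, 0.9, 1.2, 3.3]  # all units, concatenated
spike_times_index = [3, 5]               # unit 0 -> rows 0..2, unit 1 -> rows 3..4

def unit_spike_times(data, index, u):
    """Return the spike times of unit u from the ragged layout."""
    start = index[u - 1] if u > 0 else 0
    return data[start:index[u]]
```

Here unit_spike_times(spike_times, spike_times_index, 0) returns the first three spikes and unit 1 gets the remaining two; a second VectorIndex for waveforms applies the same slicing one level up.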

GoktugAlkan commented 1 year ago

Thanks @oruebel. Actually I tried this already step-by-step, but there was an error message stating that it was not possible to resolve the references waveforms_index_index and waveforms_index. I'll try to send a toy code where I tried to implement this, along with the corresponding error message.

I think the problem might also be solved by finding a place/field inside the nwb file where I could store a customized DynamicTable object. In the nwb.acquisition field, storing a DynamicTable object is not possible. In the nwb.processing field, it is also not possible. Storing a customized DynamicTable object would eventually be the most elegant way for our case.

oruebel commented 1 year ago

@bendichter thoughts?

GoktugAlkan commented 1 year ago

I was able to figure out a solution, which looks as follows:

  1. Create a dynamic table object (called my_table in the code) with columns spikeTimes (timestamps of detected spikes), waveforms (time-series of detected waveforms), and labels (name of unit and electrode that the detected waveforms correspond to).
  2. Create a ProcessingModule by executing processing_module = types.core.ProcessingModule('description', 'waveforms').
  3. Write my_table into processing_module.dynamictable by executing processing_module.dynamictable.set('spikeTable', my_table).
  4. Add the processing module to the nwb file by executing nwb.processing.set('spikeInfo', behavior_processing_module).
  5. Store nwb file.

This procedure is working right now. The table looks as in the attached picture and suits our needs well. Note that the table corresponds to a demo/invented dataset.

[attached image: exampleDynTable]

GoktugAlkan commented 1 year ago

I have just noticed that with the approach from my previous post, everything works fine, i.e., you can store the nwb file and read it again without any problems and any loss of data.

But when I try to read the same nwb file in python, the table cannot be constructed.

Do you have an idea why a working nwb file created with matnwb causes a problem in pynwb?

The full error message that I get in python is below:

ConstructError: (root/processing/spikeInfo/spikeTable GroupBuilder {'attributes': {'colnames': array(['spikeTimes', 'waveforms', 'labels'], dtype=object), 'description': 'an example table', 'namespace': 'hdmf-common', 'neurodata_type': 'DynamicTable', 'object_id': 'da438744-2c4b-4ff6-ab8a-e7fa673ce226'}, 'groups': {}, 'datasets': {'id': root/processing/spikeInfo/spikeTable/id DatasetBuilder {'attributes': {'namespace': 'hdmf-common', 'neurodata_type': 'ElementIdentifiers', 'object_id': '7dfa7eca-c1eb-4279-a93e-ce41bc848d81'}, 'data': <HDF5 dataset "id": shape (120,), type "<i8">}, 'labels': root/processing/spikeInfo/spikeTable/labels DatasetBuilder {'attributes': {'description': 'labels', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '495a8063-a235-4d7f-90b7-34d1a958c08a'}, 'data': <StrDataset for HDF5 dataset "labels": shape (120,), type "|O">}, 'spikeTimes': root/processing/spikeInfo/spikeTable/spikeTimes DatasetBuilder {'attributes': {'description': 'spikeTimes', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '86ca1c35-5495-430a-a23d-46f34ac8f797'}, 'data': <HDF5 dataset "spikeTimes": shape (1, 120), type "<f8">}, 'waveforms': root/processing/spikeInfo/spikeTable/waveforms DatasetBuilder {'attributes': {'description': 'waveforms', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '60ee28ca-3fe0-4ebd-8248-e56f65e574db'}, 'data': <HDF5 dataset "waveforms": shape (30, 120), type "<f8">}}, 'links': {}}, 'Could not construct DynamicTable object due to: columns must be the same length')

Despite the fact that everything is working perfectly in matnwb, I checked the dimensions of the columns once again in matlab:

lawrence-mbf commented 1 year ago

@GoktugAlkan

Can you actually transpose waveforms and double check the size of spikeTimes? For compatibility with python, matnwb dimensions should be flipped when assigned as column data. From the pynwb error dump, it looks like those columns were not written correctly.

oruebel commented 1 year ago

I'll try to send a toy code where I tried to implement this and also the error message corresponding to this.

Thanks @GoktugAlkan that will be useful. Since NWB has the units table already defined for this kind of data it would be best to use it instead of creating a custom table. Having the data in the units table will make it easier for tools and other users to use the data.

GoktugAlkan commented 1 year ago

@lawrence-mbf spikeTimes has dimensions 120x1. Also, note that column id has dimensions 120x1. When I transpose waveforms, I get dimensions 30x120.

I can store the nwb with that table. However, when I try to read the nwb file with nwbRead and use the toTable function to visualize the table, I get an error in matlab.

Reading the same nwb file in python results again in the following error:

ConstructError: (root/processing/spikeInfo/spikeTable GroupBuilder {'attributes': {'colnames': array(['spikeTimes', 'waveforms', 'labels'], dtype=object), 'description': 'an example table', 'namespace': 'hdmf-common', 'neurodata_type': 'DynamicTable', 'object_id': '29544f2a-1c85-46f9-9381-42e8ae63acfe'}, 'groups': {}, 'datasets': {'id': root/processing/spikeInfo/spikeTable/id DatasetBuilder {'attributes': {'namespace': 'hdmf-common', 'neurodata_type': 'ElementIdentifiers', 'object_id': '8aa37f1a-6c53-4277-b84d-8073b8e0f89f'}, 'data': <Closed HDF5 dataset>}, 'labels': root/processing/spikeInfo/spikeTable/labels DatasetBuilder {'attributes': {'description': 'labels', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '4e043ece-b71c-4b3f-8f37-ff1eddc6b2c2'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'spikeTimes': root/processing/spikeInfo/spikeTable/spikeTimes DatasetBuilder {'attributes': {'description': 'spikeTimes', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': 'e7c127fe-9231-4bec-a979-7853a0401297'}, 'data': <Closed HDF5 dataset>}, 'waveforms': root/processing/spikeInfo/spikeTable/waveforms DatasetBuilder {'attributes': {'description': 'waveforms', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '33976f23-a0b2-4be4-848c-681d7e94e7f2'}, 'data': <Closed HDF5 dataset>}}, 'links': {}}, 'Could not construct DynamicTable object due to: columns must be the same length')

lawrence-mbf commented 1 year ago

When you can, please post the matlab error. You may have to transpose all the other data too for toTable. There was old code that detected vectors specifically and wrote them properly to HDF5 using the first dimension, but this may not be true anymore.

oruebel commented 1 year ago

Could not construct DynamicTable object due to: columns must be the same length

When I transpose waveforms, I get dimensions 30x120.

I think this error is likely due to an error in the shape of the waveforms dataset. The waveforms dataset should be a 1D dataset (i.e., the values should be flattened). In the NWB file, instead of a (120, 30) two-dimensional dataset, this should be a 1D dataset with shape (3600,). The values of the waveforms_index dataset are then the indices used to select the waveforms: the data associated with the first waveform is at waveforms[0:waveforms_index[0]], the data associated with the second row is at waveforms[waveforms_index[0]:waveforms_index[1]], and so on. The waveforms_index here would be [30, 60, ..., 3600].
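The shapes described above can be checked with a small plain-Python sketch (the counts 120 and 30 come from the error dump earlier in this thread):

```python
# 120 waveforms of 30 samples each, flattened to one 1D dataset of 3600
# values; waveforms_index then marks the cumulative end of each waveform.
n_waveforms, n_samples = 120, 30
flat = list(range(n_waveforms * n_samples))          # stand-in for the data
waveforms_index = [n_samples * (k + 1) for k in range(n_waveforms)]

first = flat[0:waveforms_index[0]]                   # samples of waveform 0
second = flat[waveforms_index[0]:waveforms_index[1]] # samples of waveform 1
```

Every slice between consecutive index values has length 30, and the final index value equals the length of the flattened dataset.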

GoktugAlkan commented 1 year ago

@lawrence-mbf As you proposed, I flipped all the variables. The dimensions of the columns id, spikeTimes, waveforms, and labels are then 1x120, 1x120, 30x120, and 1x120, respectively. Storing and reading the data in matlab is not a problem, as before. However, using the toTable function results in the following error message in matlab:

 Error using io.space.segmentSelection
Expected input to be an array with all of the values <= 1.

Error in types.untyped.DataStub/load_mat_style (line 175)
                shapes = io.space.segmentSelection(varargin, dims); %#ok<PROPLC>

Error in indexing (line 380)
            data = obj.load_mat_style(CurrentSubRef.subs{:});

Error in indexing (line 302)
                data = obj.internal.stub(CurrentSubRef.subs{:});

Error in types.util.dynamictable.getRow>select (line 135)
        selected = Vector.data(selectInd{:});

Error in types.util.dynamictable.getRow (line 44)
    row{i} = select(DynamicTable, indexNames, ind);

Error in types.hdmf_common.DynamicTable/getRow (line 119)
        row = types.util.dynamictable.getRow(obj, id, varargin{:});

Error in types.util.dynamictable.nwbToTable (line 81)
matlabTable = [matlabTable DynamicTable.getRow( ...

Error in types.hdmf_common.DynamicTable/toTable (line 123)
        table = types.util.dynamictable.nwbToTable(obj, varargin{:});

Error in finalDemoDynamicTable (line 57)
toTable(readFile.processing.get('spikeInfo').dynamictable.get('spikeTable'))

However, reading the same nwb file in python and using to_dataframe works right now. Actually, I need to make sure that both work, since we want to share the files with our collaborators (some of them work in python, others in matlab).

lawrence-mbf commented 1 year ago

@oruebel 's suggestion of flattening the ragged array column may be a solution in that case. I have a hunch the toTable implementation also assumes vector data for table columns.

GoktugAlkan commented 1 year ago

@lawrence-mbf But the columns are VectorData objects.

oruebel commented 1 year ago

@lawrence-mbf good point. I was just referring to how the dataset should look in the HDF5 file. I'm not sure what MatNWB expects in toTable.

lawrence-mbf commented 1 year ago

@GoktugAlkan

But the columns are VectorData objects.

Which point are you referring to?

GoktugAlkan commented 1 year ago

@lawrence-mbf I thought you were saying that the columns of the DynamicTable object should be VectorData objects. That is the case in my implementation. I think I misunderstood your point.

lawrence-mbf commented 1 year ago

Sorry @GoktugAlkan, I mean that the VectorData objects should hold vector (1-dimensional) data as opposed to matrices. Sorry, there's a bit of jargon jumbled in there. The point is that @oruebel's solution might be what you need to resolve this particular error message for toTable. It's good that this works for pynwb, though.

GoktugAlkan commented 1 year ago

@lawrence-mbf No worries, I get you right now. In the tutorial for DynamicTable, I saw a very similar implementation where one column was a matrix, i.e., the matrix was put into a VectorData object which was then fed into the DynamicTable object. Hence, I am wondering why I cannot just use the same approach for my setting and guarantee that everything works fine both in matlab and python.

lawrence-mbf commented 1 year ago

I may need to look into this further then. Maybe this is a toTable bug introduced with the various addRow changes. I think the shape of the data in the file is what we want though, so this might just be a bug fix I need to do for toTable.

GoktugAlkan commented 1 year ago

I may need to look into this further then. Maybe this is a toTable bug introduced with the various addRow changes. I think the shape of the data in the file is what we want though, so this might just be a bug fix I need to do for toTable.

@lawrence-mbf Thanks! I am looking forward to a solution

GoktugAlkan commented 1 year ago

Thanks @GoktugAlkan that will be useful. Since NWB has the units table already defined for this kind of data it would be best to use it instead of creating a custom table. Having the data in the units table will make it easier for tools and other users to use the data.

@oruebel I implemented another toy example based on the schemes that you referred to. The code covers the following scenario:

Two spikes are detected at different time points. These spikes correspond to the same unit. There are two waveforms (i.e. time-series) that describe these two spikes. The waveforms are read from the same electrode. The attached scheme visualizes this toy example:

[attached image: DebugNWBUnits]

I used the code below to feed in the information from above to nwb.units:

%% initialize nwb file
nwb = NwbFile( ...
    'session_description', 'mouse in open exploration',...
    'identifier', ['Mouse5_Day3'], ...
    'session_start_time', datetime(2018, 4, 25, 2, 30, 3), ...
    'general_experimenter', 'My Name', ... % optional
    'general_session_id', 'session_1234', ... % optional
    'general_institution', 'University of My Institution', ... % optional
    'general_related_publications', 'DOI:10.1016/j.neuron.2016.12.011');

%% create spike_times and spike_times_index

%%% one unit is detected spiking at timepoints 11 and 222
spikes = {[11, 222]};      

%%% create VectorData and VectorIndex
[spike_times_vector, spike_times_index] = util.create_indexed_column(spikes); 

%%% create waveforms and waveforms_index

%%% the two detected spikes correspond to two waveforms. Hence, the object
%%% below has randn(2,30) as data (i.e. each waveform is a time-series with
%%% 30 timepoints)
waveforms = types.hdmf_common.VectorData( ...
        'data', randn(2,30), ...
        'description', 'none' ...
    );

%%% create waveforms_index
data_waveforms_index = [1;1];
data_waveforms_index = uint64(cumsum(data_waveforms_index));

ov = types.untyped.ObjectView(waveforms);
waveforms_index = types.hdmf_common.VectorIndex( ...
    'data', data_waveforms_index, ...
    'target', ov, ...
    'description', 'indexes data' ...
);

%%% create VectorData that waveforms_index_index can reference to
data_vector_waveforms_index_index = types.hdmf_common.VectorData( ...
        'data', [1;2], ...
        'description', 'none' ...
    );

%%% create waveforms_index_index
data_waveforms_index_index  = uint64(2);
ov = types.untyped.ObjectView(data_vector_waveforms_index_index);
waveforms_index_index = types.hdmf_common.VectorIndex( ...
    'data', data_waveforms_index_index, ...
    'target', ov, ...
    'description', 'indexes data' ...
);

%% feed information into nwb.units
nwb.units = types.core.Units( ...
    'colnames', {'spike_times','waveforms'}, ...
    'description', 'units table', ...
    'id', types.hdmf_common.ElementIdentifiers( ...
        'data', int64(0:length(spikes) - 1)' ...
    ), ...
    'spike_times', spike_times_vector, ...
    'spike_times_index', spike_times_index, ...
    'waveforms', waveforms, ...
    'waveforms_index', waveforms_index, ...
    'waveforms_index_index', waveforms_index_index);

A brief explanation of the code:

  1. In the first section (indicated by %% intialize nwb file) the nwb file is initialized.
  2. In the second section (indicated by %% create spike_times and spike_times_index), the columns spike_times_index, spike_times, waveforms_index_index, waveforms_index, and waveforms are created. Note that for waveforms a matrix of dimensions 2x30 was used where each row corresponds to one of the two spikes.
  3. In the last section (indicated by %% feed information into nwb.units), the information is fed into the field nwb.units. Based on the tutorial on ragged arrays, for colnames, I used spike_times and waveforms.
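For this toy example (1 unit, 2 spikes, one waveform row per spike), the cumulative index values work out as follows; this is a plain-Python sketch with illustrative variable names, not matnwb code:

```python
# One unit with two spikes; each spike has exactly one waveform row.
n_units, spikes_per_unit, waveforms_per_spike = 1, 2, 1
n_spikes = n_units * spikes_per_unit

# Each index vector stores cumulative end offsets into the level below it.
spike_times_index = [spikes_per_unit * (u + 1) for u in range(n_units)]
waveforms_index = [waveforms_per_spike * (s + 1) for s in range(n_spikes)]
waveforms_index_index = [spikes_per_unit * (u + 1) for u in range(n_units)]
```

This gives spike_times_index = [2], waveforms_index = [1, 2], and waveforms_index_index = [2].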

Executing the last section of the code results in the following error message:

Error using assert
Invalid table detected: column heights (vector lengths or number of matrix columns) must be the same.

Error in types.util.dynamictable.checkConfig (line 50)
    assert(isscalar(tableHeight), ...

Error in types.core.Units (line 60)
            types.util.dynamictable.checkConfig(obj);

Error in testRaggedArray (line 58)
nwb.units = types.core.Units( ...

Remarks:

Further observations: The source of the problem may be the data inside waveforms. When I do not include waveforms in nwb.units, the code executes without an error message. However, when trying to export this nwb file, I get the following error message:

Error using NwbFile/resolveReferences
object(s) could not be created:
    /units/waveforms_index
    /units/waveforms_index_index

The listed object(s) above contain an ObjectView, RegionView , or SoftLink object that has failed to resolve itself. Please check for any references that were
not assigned to the root  NwbFile or if any of the above paths are incorrect.

Error in NwbFile/export (line 63)
                obj.resolveReferences(output_file_id, refs);

Error in nwbExport (line 36)
    nwb(i).export(filename);

Error in testRaggedArray (line 72)
nwbExport(nwb, ff);

Many thanks in advance for your help and patience! I think it would be very helpful if you could make a very short toy example for nwb.units (@lawrence-mbf) that includes both spike time and waveform information. Users could then build on that toy example.

GoktugAlkan commented 1 year ago

@oruebel @lawrence-mbf To have an alternative solution, I tested creating an nwb file in pyNWB, i.e., I ran the following code, which is a copy from one of your tutorials:

nwbfile = NWBFile(
    session_description="my first synthetic recording",
    identifier=str('ALKAN'),
    session_start_time=datetime.now(tzlocal()),
    experimenter=[
        "Baggins, Bilbo",
    ],
    lab="Bag End Laboratory",
    institution="University of Middle Earth at the Shire",
    experiment_description="I went on an adventure to reclaim vast treasures.",
    session_id="LONELYMTN001",
)

nwbfile.add_unit_column(name="quality", description="sorting quality")

poisson_lambda = 20
firing_rate = 20
n_units = 10
for n_units_per_shank in range(n_units):
    n_spikes = np.random.poisson(lam=poisson_lambda)
    spike_times = np.round(
        np.cumsum(np.random.exponential(1 / firing_rate, n_spikes)), 5
    )

    wvForm = np.random.normal(loc=0.0, scale=1.0, size=(spike_times.shape[0], 4))

    nwbfile.add_unit(
        spike_times=spike_times, quality="good", waveform_mean=[1.0, 2.0, 3.0, 4.0, 5.0])

nwbfile.units.to_dataframe()

with pynwb.NWBHDF5IO(r"/home/matlab/Docker_shared/NWB Project/spikes/nwbStandard/unitsTest_PYNWB.nwb", "w") as io:
    io.write(nwbfile)

This code runs without problems in python. However, when reading the exported nwb file in matlab, one gets the following error message:

Error using assert
Unexpected properties {unit}.

Your schema version may be incompatible with the file.  Consider checking the schema version of the file with `util.getSchemaVersion(filename)` and comparing
with the YAML namespace version present in nwb-schema/core/nwb.namespace.yaml

Error in types.util.checkUnset (line 13)
assert(isempty(dropped),...

Error in types.hdmf_common.VectorData (line 24)
            types.util.checkUnset(obj, unique(varargin(1:2:end)));

Error in io.parseDataset (line 81)
    parsed = eval([Type.typename '(kwargs{:})']);

Error in io.parseGroup (line 22)
    dataset = io.parseDataset(filename, datasetInfo, fullPath, Blacklist);

Error in io.parseGroup (line 38)
    subg = io.parseGroup(filename, group, Blacklist);

Error in nwbRead (line 59)
nwb = io.parseGroup(filename, h5info(filename), Blacklist);

Error in testRaggedArray (line 77)
readFile = nwbRead(ffPYNWB);
lawrence-mbf commented 1 year ago

Regarding the pynwb recreation, this is an old issue that we're currently working on a solution for (#43). The issue is that the specific Units table defines a unique VectorData that adds a unit property and is currently unreadable in MatNWB, though it's not a big deal to hack in.

Regarding the waveforms code, VectorIndex objects need a valid ObjectView pointing to a VectorData object. In your case you just need to set the target property in waveforms_index to a types.untyped.ObjectView like so:

VectorIndex = types.hdmf_common.VectorIndex('target', types.untyped.ObjectView('/units/waveforms'),'data', []);

I usually point people to the addRow or addColumn methods instead of raw tables, but if they're not compatible with many other workflows I can consider adding a toy example.

GoktugAlkan commented 1 year ago

@lawrence-mbf Thanks a lot! The code is working right now. In addition, it is also possible to read the exported nwb file in python without problems. Below is the corrected code based on your suggestion:

%% THIS IS MATLAB CODE
%% initialize nwb file
nwb = NwbFile( ...
    'session_description', 'mouse in open exploration',...
    'identifier', ['Mouse5_Day3'], ...
    'session_start_time', datetime(2018, 4, 25, 2, 30, 3), ...
    'general_experimenter', 'My Name', ... % optional
    'general_session_id', 'session_1234', ... % optional
    'general_institution', 'University of My Institution', ... % optional
    'general_related_publications', 'DOI:10.1016/j.neuron.2016.12.011');

%% create spike_times and spike_times_index

%%% one unit is detected spiking at timepoints 11 and 222
spikes = {[11, 222]};      

%%% create VectorData and VectorIndex
[spike_times_vector, spike_times_index] = util.create_indexed_column(spikes); 

%% create waveforms and waveforms_index

%%% the two detected spikes correspond to two waveforms. Hence, the object
%%% below has randn(2,30) as data (i.e. each waveform is a time-series with
%%% 30 timepoints)
waveforms = types.hdmf_common.VectorData( ...
        'data', randn(2,30), ...
        'description', 'none' ...
    );

waveforms_index = types.hdmf_common.VectorIndex('target', types.untyped.ObjectView('/units/waveforms'),'data', [1;2], 'description', 'test');

waveforms_index_index = types.hdmf_common.VectorIndex('target', types.untyped.ObjectView('/units/waveforms_index'),'data', 2, 'description', 'test');

%% feed information into nwb.units
nwb.units = types.core.Units( ...
    'colnames', {'spike_times', 'waveforms'}, ...
    'description', 'units table', ...
    'id', types.hdmf_common.ElementIdentifiers( ...
        'data', int64(0:length(spikes) - 1)' ...
    ), ...
    'spike_times', spike_times_vector, ...
    'spike_times_index', spike_times_index, ...
    'waveforms', waveforms, ...
    'waveforms_index', waveforms_index, ...
    'waveforms_index_index', waveforms_index_index);

%% export nwb file
ff = '/home/matlab/Docker_shared/NWB Project/spikes/nwbStandard/unitsTest.nwb';
nwbExport(nwb, ff);

%%
readFile = nwbRead(ff);
lawrence-mbf commented 1 year ago

Awesome! I'm glad this got sorted out! I've opened #505 for the toTable bug. As mentioned before, I think I will implement the hack for VectorData units in the near future to at least allow using Units tables from pynwb. Did you have further issues other than the ones listed here?

GoktugAlkan commented 1 year ago

@lawrence-mbf Thanks a lot, everything works for now.

As a summary: We had pre-existing nwb files that we wanted to expand by adding the corresponding information about the extracted spikes, i.e., we wanted to populate the field nwb.units, which we hadn't populated before. Right now, we can take a pre-existing nwb file, add the data about the spikes to the file, and store it again at the same path. The spike information contains everything that we need: the spike times, the waveforms, and the labels of the units and electrodes.

Furthermore, it is possible to read the modified nwb file in pyNWB without problems and access all fields without loss of any data.

Thanks a lot!

lawrence-mbf commented 1 year ago

The fix will coincide with the fix for #238 so this one is now closed.