CYHSM / DeepInsight

A general framework for interpreting wide-band neural activity

Question regarding data structures #5

Open elhananby opened 4 years ago

elhananby commented 4 years ago

Hi Markus,

Would it be possible to get the sample .nwb file or some explanation of the data structure expected by DeepInsight (channels, timestamps, position, etc.)? I have data from a different recording system (Neuralynx) and would be very interested in trying to use it.

Thanks and happy holidays, Elhanan

DiogoSantosPata commented 4 years ago

Similarly, is it feasible to use epoched NumPy arrays (shape [channels, times])?

CYHSM commented 4 years ago

For now, two functions need to be adjusted to use DeepInsight with a custom dataset (code taken from the README):

# Load your electrophysiological or calcium-imaging data
(raw_data,
 raw_timestamps,
 output,
 output_timestamps,
 info) = deepinsight.util.tetrode.read_tetrode_data(fp_raw_file)

Replace the function read_tetrode_data with your own loader, e.g. read_neuralynx_data. Note that this function should return raw_data (shape: N x M, where N is the number of timepoints and M the number of channels), raw_timestamps giving the timestamp of each sample (shape: N), plus the desired output and its corresponding output_timestamps.
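
As a concrete starting point, a minimal sketch of such a loader for Neuralynx data might look like the following, assuming the neo package is used to read the session folder. The function name read_neuralynx_data, the placeholder position output, and the info keys beyond sampling_rate are illustrative, not part of DeepInsight:

import numpy as np
from neo.io import NeuralynxIO  # assumption: neo can read your .ncs session folder

def read_neuralynx_data(fp_raw_folder):
    """Hypothetical replacement for read_tetrode_data (names and info keys are illustrative)."""
    reader = NeuralynxIO(dirname=fp_raw_folder)
    block = reader.read_block()
    signal = block.segments[0].analogsignals[0]   # neo AnalogSignal, shape (timepoints, channels)

    raw_data = np.asarray(signal.magnitude)                           # N x M
    raw_timestamps = np.asarray(signal.times.rescale('s').magnitude)  # shape: N, in seconds

    # The behaviour to decode (e.g. x/y position from your tracking file) has to be
    # loaded separately for Neuralynx sessions; random values stand in here.
    output_timestamps = raw_timestamps[::600]            # placeholder: tracking-like rate from the raw clock
    output = np.random.rand(len(output_timestamps), 2)   # placeholder x/y position

    info = {'sampling_rate': float(signal.sampling_rate.rescale('Hz').magnitude),
            'num_channels': raw_data.shape[1]}
    return raw_data, raw_timestamps, output, output_timestamps, info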

# Prepare outputs
deepinsight.util.tetrode.preprocess_output(fp_deepinsight, raw_timestamps, output, output_timestamps,
                                           sampling_rate=info['sampling_rate'])

Given the outputs from above, this function aligns them to the raw_data and writes them to an HDF5 file. Adjust this based on the behaviour/stimuli you want to decode.
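
For instance, if the behaviour to decode is 2D position, the call might look like this; raw_timestamps and info are assumed to come from the loading step above, and the position arrays here are synthetic placeholders:

import numpy as np
import deepinsight

fp_deepinsight = 'session_01_processed.h5'   # placeholder path for the HDF5 file DeepInsight writes

# Synthetic behaviour: x/y position sampled at 50 Hz over the recording
tracking_rate = 50.0
output_timestamps = np.arange(raw_timestamps[0], raw_timestamps[-1], 1.0 / tracking_rate)  # shape: (K,)
output = np.random.rand(len(output_timestamps), 2)                                         # shape: (K, 2)

deepinsight.util.tetrode.preprocess_output(fp_deepinsight, raw_timestamps, output, output_timestamps,
                                           sampling_rate=info['sampling_rate'])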

I will also upload a Jupyter notebook soon to provide a clear example of the full workflow for a new dataset.

elhananby commented 4 years ago

UPDATE: nvm, got it running with tensorflow-gpu and it finished processing much quicker. Is there a specific reason why it's not configured to use the GPU in the first place?

Hi Markus, thanks for the help. I got it running on Google Colab for some testing, however the model loss/shuffled loss calculation phase either takes a very long time or something is broken. It just continues looping over the different models indefinitely, always returning "Performed 1079 gradient steps". Is it supposed to take this long, or is something wrong?

Thanks

edgarbc commented 4 years ago

Hi elhananby, I am trying to do something similar (run DeepInsight from Google Colab on Neuralynx files). Is there a chance you could share how you did it?

I am using neo to load the Neuralynx files and just pip installed DeepInsight in Google Colab. Now trying to run an example...
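
In case it helps, a minimal Colab setup along these lines might look like the sketch below; installing from a clone of the GitHub repository and the data path are assumptions, not confirmed instructions:

# Colab cell: install DeepInsight from the repository and neo for Neuralynx I/O
# (assumption: an editable install from a clone of the repo works in Colab)
!git clone https://github.com/CYHSM/DeepInsight.git
!pip install -e ./DeepInsight
!pip install neo

from neo.io import NeuralynxIO

# Point neo at the folder containing the .ncs files (path is a placeholder)
reader = NeuralynxIO(dirname='/content/drive/MyDrive/neuralynx_session')
signal = reader.read_block().segments[0].analogsignals[0]

print(signal.shape)          # (timepoints, channels) -- the N x M layout DeepInsight expects
print(signal.sampling_rate)  # sampling rate as a quantity, e.g. 32000.0 Hz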

Thanks!
