PyNeuroTrace: Python code for Neural Timeseries

Installation

pyNeuroTrace can be installed with pip:

pip install pyNeuroTrace

GPU-supported functions use the Python library CuPy, which requires an NVIDIA GPU and a matching CUDA Toolkit installation.

pyNeuroTrace can be installed together with CuPy using pip:

pip install pyNeuroTrace[GPU]

If CuPy fails to build or install with this command, try installing CuPy first using a wheel that matches your CUDA Toolkit version, e.g.:

pip install cupy-cuda12x

followed by this command to install pyNeuroTrace:

pip install pyNeuroTrace

For more information on installing CuPy, see the CuPy installation documentation.
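
To confirm the GPU setup works, a quick sanity check using CuPy's standard API (not part of pyNeuroTrace) can be run:

```python
# Sanity check with CuPy's standard API: counts visible CUDA devices.
import cupy as cp

print(cp.cuda.runtime.getDeviceCount())  # should print 1 or more
```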

Documentation

To help you get started with pyNeuroTrace, full documentation of the software and its available modules and methods is available at our pyNeuroTrace github.io site.

Visualization

Probably the most useful part of pyNeuroTrace is its collection of visualization functions, provided to help display trace data in easy-to-understand formats. For more details, and visual examples of what is available, please consult the README next to viz.py.

Notebook utilities

Although not specific to neural timeseries, pyNeuroTrace also provides a collection of tools that are useful for running analysis in Jupyter notebooks.

These include:

notebook.filePicker for selecting an existing file, notebook.newFilePicker for indicating a new path, and notebook.folderPicker for selecting an existing folder.

These all open PyQt file dialogs, making it easy for users to interact with the file system. Customisation options exist, for example the prompt text and the default location.
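
As a minimal sketch, assuming the pickers can be called with their default options (the exact module path and keyword names may differ):

```python
# Hypothetical usage of the notebook pickers inside a Jupyter cell.
from pyneurotrace import notebook  # module path assumed

dataPath = notebook.filePicker()    # select an existing file
outPath = notebook.newFilePicker()  # indicate a new path to write to
folder = notebook.folderPicker()    # select an existing folder
```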

showTabs(data, func, titles, progressBar=False)

Useful for performing the same analysis multiple times across e.g. different neurons, experiments, conditions, ...etc. Given either a list or a dictionary, all items will be iterated over, and each individually drawn onto its own tab with the provided func. The provided function is expected to have the signature func(idx, key, value), where idx is the (0-based) index of the tab, key is the list index or dictionary key, and value is the item to be processed.
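
A hypothetical sketch, with made-up placeholder data and an assumed module path (pyneurotrace.notebook):

```python
# One tab per condition; showTabs calls drawFunc once for each item.
import matplotlib.pyplot as plt
import numpy as np
from pyneurotrace import notebook  # module path assumed

tracesByCondition = {
    'control': np.random.rand(100),   # placeholder traces
    'stimulus': np.random.rand(100),
}

def drawFunc(idx, key, value):
    # idx: 0-based tab index, key: dict key, value: the trace itself
    plt.plot(value)
    plt.title('%d: %s' % (idx, key))

notebook.showTabs(tracesByCondition, drawFunc, list(tracesByCondition.keys()))
```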

Processing Data

Common per-trace processing filters are provided within filters.py. These are all designed to take a numpy array of traces, with each row an independent trace, and to return a filtered array of the same size.

These include:

filters.deltaFOverF0(traces, hz, t0, t1, t2)

Converts a raw signal to the standard delta-F-over-F0 (ΔF/F0), using the technique given in Jia et al, 2011. The smoothing parameters (t0, t1, t2) are as described in the paper, all with units in seconds. The sample rate (hz) must also be provided to convert these to sample units.
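
A minimal sketch, assuming the module path pyneurotrace.filters; the smoothing values here are illustrative, not recommendations:

```python
# ΔF/F0 on a (nTraces x nSamples) array recorded at 10 Hz.
import numpy as np
from pyneurotrace import filters  # module path assumed

hz = 10
traces = np.random.rand(3, 60 * hz)  # placeholder: 3 one-minute traces
dff = filters.deltaFOverF0(traces, hz, t0=0.2, t1=0.75, t2=3.0)
assert dff.shape == traces.shape     # filters preserve array size
```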

filters.okada(traces)

Reduces noise in traces by smoothing single peaks or valleys, as described in Okada et al, 2016.
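
For example (module path again assumed), the Okada filter can be applied as a denoising step before ΔF/F0:

```python
# Smooth single-point noise, then convert to ΔF/F0.
import numpy as np
from pyneurotrace import filters  # module path assumed

hz = 10
traces = np.random.rand(3, 600)      # placeholder raw traces
smoothed = filters.okada(traces)     # same shape as the input
dff = filters.deltaFOverF0(smoothed, hz, t0=0.2, t1=0.75, t2=3.0)
```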

Event Detection

Python implementations of the three algorithms discussed in our paper, Sakaki et al, 2018, for finding events within calcium fluorescence traces.

ewma(data, weight)

calculates the Exponentially-Weighted Moving Average for each trace, given how strongly to weight new points (vs. the previous average).
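
A minimal sketch, assuming the module path pyneurotrace.events; the weight value is illustrative:

```python
# EWMA filter: a larger weight follows new samples more aggressively.
import numpy as np
from pyneurotrace import events  # module path assumed

data = np.random.rand(2, 500)    # placeholder traces
ewmaOut = events.ewma(data, weight=0.1)
```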

cusum(data, slack)

calculates the Cumulative Sum for each trace, taking a slack parameter which controls how far from the mean the signal needs to be to not be considered noise.
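
A similar sketch for CUSUM (module path and slack value assumed):

```python
# CUSUM filter: larger slack treats more deviation from the mean as noise.
import numpy as np
from pyneurotrace import events  # module path assumed

data = np.random.rand(2, 500)    # placeholder traces
cusumOut = events.cusum(data, slack=1.0)
```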

matchedFilter(data, windowSize, A, tA, tB)

calculates, for each sample, the likelihood ratio that it is the end of a window containing a transient of the expected shape: a double exponential with amplitude A, rise-time tA, and decay-time tB (both in samples).
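
And for the matched filter, with an assumed template of amplitude 2.0, a 5-sample rise, and a 30-sample decay (all values illustrative; module path assumed):

```python
# Matched filter against a double-exponential transient template.
import numpy as np
from pyneurotrace import events  # module path assumed

data = np.random.rand(2, 500)    # placeholder traces
mfOut = events.matchedFilter(data, windowSize=100, A=2.0, tA=5, tB=30)
```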

The results of each of these three detection filters can then be passed through thresholdEvents(data, threshold), to register detected events whenever the filter output rises above the given threshold.
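
Putting it together, a full detection pass might look like this sketch (threshold value illustrative; module path assumed):

```python
# Chain one of the filters into thresholdEvents to get detected events.
import numpy as np
from pyneurotrace import events  # module path assumed

data = np.random.rand(2, 500)                  # placeholder traces
filtered = events.ewma(data, weight=0.1)       # any of the three filters
detected = events.thresholdEvents(filtered, threshold=0.6)
```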

Reading Data (lab-specific)

The code within this repository was designed to read data from experiments performed by the Kurt Haas lab at UBC. If you're from this lab, read on. If not, this part is probably not relevant, but feel free to ask if you'd be interested in loading your own data file formats.

A number of options for loading data files are available within files.py, including: