Pipeline for processing two-photon calcium imaging data. Copyright (C) 2018 Howard Hughes Medical Institute Janelia Research Campus
suite2p includes the following modules: registration, cell detection, spike detection (deconvolution), and a visualization GUI.
This code was written by Carsen Stringer and Marius Pachitariu. For support, please open an issue.
The reference paper is here. The deconvolution algorithm is based on this paper, with settings based on this paper.
You can now run suite2p in google colab, no need to locally install (although we recommend doing so eventually). Note you do not have access to the GUI via google colab, but you can download the processed files and view them locally in the GUI.
See this twitter thread for GUI demonstrations.
The matlab version is available here. Note that the algorithm is older and will not work as well on non-circular ROIs.
Lectures on how suite2p works are available here.
Note on pull requests: we accept very few pull requests due to the maintenance effort required to support new code, and we do not accept pull requests from automated code checkers. If you wrote code that interfaces with or changes suite2p behavior, a common approach is to keep it in a fork and pull periodically from the main branch to make sure you have the latest updates.
If you use this package in your research, please cite the paper:
Pachitariu, M., Stringer, C., Schröder, S., Dipoppa, M., Rossi, L. F., Carandini, M., & Harris, K. D. (2016). Suite2p: beyond 10,000 neurons with standard two-photon microscopy. BioRxiv, 061507.
Open an anaconda prompt / command prompt with `conda` for python 3 in the path, then:

1. Create a new environment: `conda create --name suite2p python=3.9`
2. Activate the environment: `conda activate suite2p`
3. Install suite2p: `python -m pip install suite2p`, or install the GUI version with `python -m pip install suite2p[gui]`. If you're on a zsh server, you may need to use ' ' around the suite2p[gui] call: `python -m pip install 'suite2p[gui]'`. This also installs the NWB dependencies.
4. Run `python -m suite2p` and you're all set. Running `suite2p --version` in the terminal will print the installed version of suite2p.

For additional dependencies, like h5py, NWB, Scanbox, and server job support, use the command `python -m pip install suite2p[io]`.
If you have an older `suite2p` environment you can remove it with `conda env remove -n suite2p` before creating a new one.
Note you will always have to run `conda activate suite2p` before you run suite2p. Conda ensures mkl_fft and numba run correctly and quickly on your machine. If you want to run jupyter notebooks in this environment, then also run `conda install jupyter`.
To upgrade the suite2p package (available on PyPI), run the following in the environment:
pip install --upgrade suite2p
This package relies on the awesomeness of pyqtgraph, PyQt6, torch, numpy, numba, scanimage-tiff-reader, scipy, scikit-learn, tifffile, natsort, and our neural visualization tool rastermap. You can pip install or conda install all of these packages. If you are having issues with PyQt6, try installing it with conda: `conda install pyqt`. On Ubuntu you may need to run `sudo apt-get install libegl1` to support PyQt6. Alternatively, you can use PyQt5 by running `pip uninstall PyQt6` and `pip install PyQt5`. If you already have a PyQt version installed, suite2p will not install a new one.
The software has been heavily tested on Windows 10 and Ubuntu 18.04, and less well tested on Mac OS. Please post an issue if you have installation problems.
The simplest way is:
pip install git+https://github.com/MouseLand/suite2p.git
If you want to download and edit the code, and use that version:

1. `cd suite2p` in an anaconda prompt / command prompt with `conda` for python 3 in the path.
2. Create a new environment: `conda create --name suite2p python=3.9`
3. Run `conda activate suite2p` (you will have to activate the environment every time you want to run suite2p).
4. Install the package in editable mode in that folder: `pip install -e .`, or `pip install -e .[all]` to include all optional dependencies.
To run the tests, use `python setup.py test` or `pytest -vs`; this will automatically download the test data into your `suite2p` folder. The test data is split into two parts: test inputs and expected test outputs, which will be downloaded into `data/test_inputs` and `data/test_outputs` respectively. The .zip files for these two parts can be downloaded from these links: test_inputs and test_outputs.

An example dataset is provided here. It's a single-plane, single-channel recording.
The quickest way to start is to open the GUI from a command line terminal. You might need to open an anaconda prompt if you did not add anaconda to the path. Make sure to run this from a directory in which you have WRITE access (suite2p saves a couple temporary files in your current directory):
suite2p
Then run suite2p from the GUI (File -> Run suite2p), setting at minimum the following parameters: `nplanes`, `nchannels`, `tau`, `fs` (see the Python sketch further below for setting these programmatically).
The suite2p output goes to a folder called "suite2p" inside your save_path, which by default is the same as the data_path. If you ran suite2p in the GUI, it loads the results automatically. Otherwise, you can load the results with File -> Load results or by dragging and dropping the stat.npy file into the GUI.
The GUI serves two main functions: visualizing the results (registration, ROIs, and extracted traces), and curating the ROI classification (cell vs not cell).
Main GUI controls (these work in all views):
You can add your manual curation to a pre-built classifier by clicking "Add current data to classifier". Or you can make a brand-new classifier from a list of "iscell.npy" files that you've manually curated. The default classifier in the GUI is initialized as the suite2p classifier, but you can overwrite it by adding to it, or loading a different classifier and saving it as the default. The default classifier is used in the pipeline to produce the initial "iscell.npy" file.
From the command line:
suite2p --ops <path to ops.npy> --db <path to db.npy>
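A minimal sketch of how such files might be created, assuming `default_ops` as exported by recent suite2p versions (the data path and sampling rate below are illustrative placeholders):

```python
import numpy as np
from suite2p import default_ops

# Start from suite2p's default options and override a few fields.
ops = default_ops()
ops['fs'] = 30.0  # illustrative sampling rate per plane (Hz)
np.save('ops.npy', ops)

# db entries override ops and point suite2p at the data.
db = {'data_path': ['/path/to/tiffs']}  # illustrative placeholder path
np.save('db.npy', db)
```

Then run `suite2p --ops ops.npy --db db.npy`.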
From Python/Jupyter:
from suite2p.run_s2p import run_s2p
ops1 = run_s2p(ops, db)
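Here `ops` and `db` are plain dictionaries. As a fuller, minimal sketch, assuming `default_ops` from recent suite2p versions (the parameter values and data path below are illustrative placeholders, not recommendations):

```python
from suite2p import default_ops
from suite2p.run_s2p import run_s2p

# Start from the pipeline defaults and set the minimal parameters.
ops = default_ops()
ops['nplanes'] = 1     # number of imaging planes
ops['nchannels'] = 1   # number of recorded channels
ops['tau'] = 1.0       # sensor timescale in seconds (set for your indicator)
ops['fs'] = 30.0       # sampling rate per plane in Hz

# db overrides any ops fields and points suite2p at the data.
db = {'data_path': ['/path/to/tiffs']}  # illustrative placeholder path

output_ops = run_s2p(ops=ops, db=db)
```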
See our example jupyter notebook here.
F.npy: array of fluorescence traces (ROIs by timepoints)
Fneu.npy: array of neuropil fluorescence traces (ROIs by timepoints)
spks.npy: array of deconvolved traces (ROIs by timepoints)
stat.npy: array of statistics computed for each cell (ROIs by 1)
ops.npy: options and intermediate outputs
iscell.npy: specifies whether an ROI is a cell, first column is 0/1, and second column is probability that the ROI is a cell based on the default classifier
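These files can be loaded with numpy. A minimal sketch, assuming a single-plane recording whose outputs landed in `suite2p/plane0` (`stat.npy` and `ops.npy` store Python objects, hence `allow_pickle=True`):

```python
import numpy as np

plane_dir = 'suite2p/plane0'  # output folder for the first plane

F = np.load(f'{plane_dir}/F.npy')         # ROIs x timepoints fluorescence
Fneu = np.load(f'{plane_dir}/Fneu.npy')   # ROIs x timepoints neuropil traces
spks = np.load(f'{plane_dir}/spks.npy')   # ROIs x timepoints deconvolved traces
stat = np.load(f'{plane_dir}/stat.npy', allow_pickle=True)       # per-ROI stats
ops = np.load(f'{plane_dir}/ops.npy', allow_pickle=True).item()  # options dict
iscell = np.load(f'{plane_dir}/iscell.npy')  # ROIs x 2: [0/1 label, probability]

# Keep only the ROIs classified/curated as cells.
cells = iscell[:, 0].astype(bool)
F_cells, spks_cells = F[cells], spks[cells]
```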
Copyright (C) 2023 Howard Hughes Medical Institute Janelia Research Campus, the labs of Carsen Stringer and Marius Pachitariu.
This code is licensed under GPL v3 (no redistribution without credit, and no redistribution in private repos, see the license for more details).
Logo was designed by Shelby Stringer and Chris Czaja.