hummuscience closed this issue 3 years ago.
To use the anatomical masks from cellpose you'll need to make them into a "stat" array (with npix, ypix, xpix and lam) first. That part is untested so I might have ypix and xpix swapped.
import os
import numpy as np
from scipy import stats
from suite2p.extraction.extract import compute_masks_and_extract_traces

# make stat array
stat = []
for u in np.unique(masks)[1:]:
    ypix, xpix = np.nonzero(masks==u)
    npix = len(ypix)
    stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix, 'lam': np.ones(npix, np.float32)})
stat = np.array(stat)

# run extraction
F, Fneu, _, _ = compute_masks_and_extract_traces(ops, stat)

# subtract neuropil
dF = F - ops['neucoeff'] * Fneu

# compute activity statistics for classifier
sk = stats.skew(dF, axis=1)
sd = np.std(dF, axis=1)
for k in range(F.shape[0]):
    stat[k]['skew'] = sk[k]
    stat[k]['std'] = sd[k]

fpath = ops['save_path']
# save ops
np.save(ops['ops_path'], ops)
# save results
np.save(os.path.join(fpath, 'F.npy'), F)
np.save(os.path.join(fpath, 'Fneu.npy'), Fneu)
np.save(os.path.join(fpath, 'stat.npy'), stat)
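As a pure-NumPy sanity check of the stat-building loop (no suite2p needed; the toy label image here is made up for illustration):

```python
import numpy as np

# Toy label image: 0 = background, labels 1 and 2 are two "cells"
masks = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [2, 2, 0]])

stat = []
for u in np.unique(masks)[1:]:           # skip the 0 background label
    ypix, xpix = np.nonzero(masks == u)  # rows, columns of this cell's pixels
    npix = len(ypix)
    stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix,
                 'lam': np.ones(npix, np.float32)})

print(len(stat))        # 2 ROIs
print(stat[0]['npix'])  # cell 1 covers 3 pixels
print(stat[1]['ypix'])  # cell 2 sits in the bottom row: [2 2]
```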
To mimic the whole pipeline I would probably have to run a registration step before the extraction?
Yeah, note you can turn off cell detection and just run registration by setting 'roidetect' to 0. Then run that script on the registered binary.
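A minimal sketch of that override (the key name follows the suite2p settings docs; `spikedetect` is an extra assumption added here, since with no ROIs there is nothing to deconvolve):

```python
# Sketch: ops overrides for a registration-only suite2p run.
# 'roidetect' is the documented switch; 'spikedetect' is included on the
# assumption that deconvolution is pointless without ROIs.
ops_overrides = {
    'roidetect': False,    # skip functional cell detection
    'spikedetect': False,  # assumption: also skip deconvolution
}

# Merge into your ops dict before running, e.g.:
# ops = {**suite2p.default_ops(), **ops_overrides}
print(ops_overrides)
```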
Ideally we'd seed the cells with anatomical info and use functional correlations to finalize the masks, but I can't give you a time frame for when that will be done, hopefully in the next month or two.
I tried it out as follows (and also with ypix/xpix swapped):
import os
import numpy as np
from scipy import stats
from suite2p.extraction.extract import compute_masks_and_extract_traces

masks = np.load('Z:Muad/ca-imaging/exp182-20200211/5drg-wt-gradient/13_seg.npy', allow_pickle=True)
masks = masks[()]['masks']
ops = np.load('Z:Muad/ca-imaging/exp182-20200211/5drg-wt-gradient/RAW/suite2p/plane0/ops.npy', allow_pickle=True)
ops = ops[()]

# make stat array
stat = []
for u in np.unique(masks)[1:]:
    ypix, xpix = np.nonzero(masks==u)
    npix = len(ypix)
    stat.append({'xpix': xpix, 'ypix': ypix, 'npix': npix, 'lam': np.ones(npix, np.float32)})
stat = np.array(stat)

# run extraction
F, Fneu, _, _ = compute_masks_and_extract_traces(ops, stat)

# subtract neuropil
dF = F - ops['neucoeff'] * Fneu

# compute activity statistics for classifier
sk = stats.skew(dF, axis=1)
sd = np.std(dF, axis=1)
for k in range(F.shape[0]):
    stat[k]['skew'] = sk[k]
    stat[k]['std'] = sd[k]

fpath = ops['save_path']
# save ops
np.save(ops['ops_path'], ops)
# save results
np.save(os.path.join(fpath, 'F.npy'), F)
np.save(os.path.join(fpath, 'Fneu.npy'), Fneu)
np.save(os.path.join(fpath, 'stat.npy'), stat)
And got the following error:
Masks made in 15.38 sec.
Extracted fluorescence from 709 ROIs in 8464 frames, 106.60 sec.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-20-9515a478eaba> in <module>
20
21 # run extraction
---> 22 F, Fneu,_,_ = compute_masks_and_extract_traces(ops, stat)
23
24 # subtract neuropil
ValueError: too many values to unpack (expected 4)
Any idea what is causing it? I ran the pipeline with roidetect = 0
I used the cellpose masks from the first frame of the video before the registration. I wonder if it has something to do with that. If so, how do I extract the mean image from an ops object to feed into cellpose?
I managed to run it; the solution was simple, I just needed to understand Python a little better.
masknew = masks[0]
ops1 = np.load('Z:Muad/ca-imaging/exp182-20200211/5drg-wt-gradient/RAW/suite2p/ops1.npy', allow_pickle=True)
ops = ops1[0]

# make stat array
stat = []
for u in np.unique(masknew)[1:]:
    ypix, xpix = np.nonzero(masknew==u)
    npix = len(xpix)
    stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix, 'lam': np.ones(npix, np.float32)})
stat = np.array(stat)

# run extraction (the function actually returns six values, not four)
F, Fneu, _, _, ops, stat = compute_masks_and_extract_traces(ops, stat)

# subtract neuropil
dF = F - ops['neucoeff'] * Fneu

# compute activity statistics for classifier
sk = stats.skew(dF, axis=1)
sd = np.std(dF, axis=1)
for k in range(F.shape[0]):
    stat[k]['skew'] = sk[k]
    stat[k]['std'] = sd[k]

fpath = ops['save_path']
# save ops
np.save(ops['ops_path'], ops)
# save results
np.save(os.path.join(fpath, 'F.npy'), F)
np.save(os.path.join(fpath, 'Fneu.npy'), Fneu)
np.save(os.path.join(fpath, 'stat.npy'), stat)
I have a jupyter notebook as an example of running it on a single dataset if anyone is interested.
I wanted to open it in the suite2p GUI, but it is lacking the iscell and spikes files. How can I generate those to be able to look at the results?
Glad you got it to work, sorry about the bug there. This should work:
np.save(os.path.join(fpath,'iscell.npy'), np.ones(len(stat))
np.save(os.path.join(fpath,'spks.npy'), np.zeros_like(F))
Btw, the above code was missing a closing parenthesis:
np.save(os.path.join(fpath,'iscell.npy'), np.ones(len(stat)))
np.save(os.path.join(fpath,'spks.npy'), np.zeros_like(F))
Everything ran without errors. I then tried to open the results in the GUI and got the following error in the terminal:
===== 2020.05.13 09:05:07 =====
Traceback (most recent call last):
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\menus.py", line 18, in <lambda>
loadProc.triggered.connect(lambda: io.load_dialog(parent))
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 171, in load_dialog
load_proc(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 209, in load_proc
probcell = iscell[:, 1]
IndexError: too many indices for array
Checking the output of np.ones(len(stat)) tells me that it is a one-column array (its shape is (930,)), while a normal iscell.npy file has two columns (in a test file that the GUI does open, the shape is (4057, 2)).
I did the following to produce an array where all probabilities are 1:
np.save(os.path.join(fpath,'iscell.npy'), np.transpose(np.vstack((np.ones(len(stat)), np.ones(len(stat))))))
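An equivalent, slightly tidier way to build that (N, 2) array (a sketch; `n_cells` stands in for `len(stat)`, and column 0 is the cell flag while column 1 is the probability):

```python
import numpy as np

n_cells = 930  # illustration: the number of ROIs, i.e. len(stat)

# Stack the flag and probability columns side by side into an (N, 2) array
iscell = np.column_stack((np.ones(n_cells), np.ones(n_cells)))
print(iscell.shape)  # (930, 2)
```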
Now the following error shows up:
===== 2020.05.13 10:05:37 =====
Traceback (most recent call last):
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\menus.py", line 18, in <lambda>
loadProc.triggered.connect(lambda: io.load_dialog(parent))
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 171, in load_dialog
load_proc(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 246, in load_proc
make_masks_and_enable_buttons(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 42, in make_masks_and_enable_buttons
parent.stat[n]["med"],
KeyError: 'med'
I added this to the for loop from above:
# make stat array
stat = []
for u in np.unique(masknew)[1:]:
    ypix, xpix = np.nonzero(masknew==u)
    npix = len(xpix)
    stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix, 'lam': np.ones(npix, np.float32), 'med': [np.mean(ypix), np.mean(xpix)]})
stat = np.array(stat)
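Small aside: in suite2p's own stat files, 'med' is (as far as I can tell) the median pixel coordinate rather than the mean, so a closer mimic would be (toy coordinates for illustration):

```python
import numpy as np

# Toy coordinates for one ROI; in the loop above these come from np.nonzero.
ypix = np.array([2, 3, 4, 10])
xpix = np.array([5, 5, 6, 6])

# suite2p's 'med' appears to be the median pixel coordinate, which is
# more robust to stray pixels than the mean:
med = [np.median(ypix), np.median(xpix)]
print(med)  # [3.5, 5.5]
```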
Now the following error showed up:
===== 2020.05.13 10:05:30 =====
Traceback (most recent call last):
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\menus.py", line 18, in <lambda>
loadProc.triggered.connect(lambda: io.load_dialog(parent))
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 171, in load_dialog
load_proc(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 246, in load_proc
make_masks_and_enable_buttons(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\io.py", line 58, in make_masks_and_enable_buttons
views.init_views(parent)
File "C:\Users\Dr. Siemens\Anaconda3\envs\suite2p\lib\site-packages\suite2p\gui\views.py", line 64, in init_views
vcorr = parent.ops['Vcorr']
KeyError: 'Vcorr'
P.S.: I hope you don't mind me using this as a log of getting this to work. If it is getting annoying, please let me know.
Vcorr is calculated during sourcery/sparsedetect. I have a feeling that I would need to replace all the things that are calculated by the cell detection with either their real values or some fake arrays.
Is that so or am I doing something fundamentally wrong?
@carsen-stringer any suggestions on how to get this working or should I wait until it is implemented?
Quick (and probably silly) question: ypix and xpix, what do they refer to in a mask? Is it treated like a normal array, where X is columns and Y is rows? Or are they flipped in the case of images?
Additionally, does Y start from the upper left, or, like a normal graph, with the origin at the lower left?
As in, is there a difference in how images are read compared to a normal graph? (I hope I am being clear here.)
Yes, X is columns and Y is rows. Y starts from the "upper left".
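A quick NumPy illustration of that convention (row index = Y, increasing downward from the upper left):

```python
import numpy as np

img = np.zeros((3, 4), dtype=int)
img[0, 3] = 1  # top row (y=0), rightmost column (x=3)

# np.nonzero returns (row indices, column indices) = (ypix, xpix)
ypix, xpix = np.nonzero(img)
print(ypix, xpix)  # [0] [3]: y=0 is the top of the image, not the bottom
```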
I got it working so far using the mean image or the first image of the registered binary. First I run cellpose, manually curate (not much, about 5 minutes per FOV), and then run suite2p with the masks from cellpose!
Throwing some ideas around. Don't know if anyone finds this helpful.
I was rewatching the lectures about cell detection and was thinking about how one could use the masks from cellpose as a seed for suite2p. And the simplest approach to integrating them would be to use the centers of the cellpose masks as the "peaks" in the iterative process. The other approach would be to take the full mask, shrink it to 90% or something, and then extend it based on activity.
When I look at sparsedetect, around line 352 a square is initialized based on the spatial scale of the detected peak. One could start here and surround each center with a square of fixed width/height (80% of the smallest expected cell size?) and then extend the ROI based on activity for final refinement. Similarly with the second approach, just starting later in the pipeline.
The only thing I couldn't figure out is how to implement it while ignoring the steps that come after (updating the residuals on the movie). Probably it needs a new function of extraction that ignores the usual steps of iterative extraction and threshold_scaling adaptation.
I was about to run some experiments using the code above but noticed that compute_masks_and_extract_traces does not exist anymore. It was replaced by extract_traces_from_masks, which requires the additional argument neuropil_masks.
It seems that create_neuropil_masks can be used to get a neuropil mask, but I do not understand how to get that with my current input.
This is what I am currently doing, but it is not working (because I lack the neuropil masks). Any quick solution apart from downgrading to the version that used to have that function? @carsen-stringer @nickdelgrosso
Edit: rolled back to 0.7.1 to get it working again
from suite2p.extraction.extract import extract_traces_from_masks

opses = [None] * len(froot)
loaded = [None] * len(froot)
imgs = [None] * len(froot)
for i in range(len(froot)):
    opses[i] = os.path.join(froot[i], "suite2p/plane0/ops.npy")
    loaded[i] = np.load(opses[i], allow_pickle=True)
    imgs[i] = [loaded[i][()]['meanImg']]

for fri in range(len(froot)):
    masks = np.load(os.path.join(froot[fri], "suite2p/plane0/mean_seg.npy"), allow_pickle=True).item()['masks']
    ops = loaded[fri].item()

    # make stat array
    stat = []
    for u in np.unique(masks)[1:]:
        ypix, xpix = np.nonzero(masks==u)
        npix = len(xpix)
        stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix, 'lam': np.ones(npix, np.float32), 'med': [np.mean(ypix), np.mean(xpix)]})
    stat = np.array(stat)

    # run extraction
    F, Fneu, _, _, ops, stat = extract_traces_from_masks(ops, stat)

    # subtract neuropil
    dF = F - ops['neucoeff'] * Fneu

    # compute activity statistics for classifier
    sk = stats.skew(dF, axis=1)
    sd = np.std(dF, axis=1)
    for k in range(F.shape[0]):
        stat[k]['skew'] = sk[k]
        stat[k]['std'] = sd[k]

    fpath = ops['save_path']
    # save ops
    np.save(ops['ops_path'], ops)
    # save results
    np.save(os.path.join(fpath, 'F.npy'), F)
    np.save(os.path.join(fpath, 'Fneu.npy'), Fneu)
    np.save(os.path.join(fpath, 'stat.npy'), stat)
    np.save(os.path.join(fpath, 'iscell.npy'), np.transpose(np.vstack((np.ones(len(stat)), np.ones(len(stat))))))
    np.save(os.path.join(fpath, 'spks.npy'), np.zeros_like(F))
Hi @Cumol!
I came across a similar scenario where I have a bimodal distribution of cell morphologies: one type is fantastically detected with suite2p and the other is better segmented by cellpose. Therefore it would be nice to be able to add the masks from cellpose as seeds for suite2p. Did you succeed in building something for this? I will be happy to contribute to a plugin for this.
Edit: I just realised this has been implemented recently in https://github.com/MouseLand/suite2p/commit/b571bd2b21f4c02a2042ea75884d839787d1359f ! Thank you so much @carsen-stringer , this cross functionality is great.
Hey @diegoasua! As you correctly pointed out, it has been implemented recently. It does seem, however, that it 'only' does the anatomical segmentation without functional refinement. It is somewhat functional, in that it uses the mean image and the max projection.
There are two options for calculating the reference image for cellpose, as described here on lines 47 to 55. For my data, 'anatomical_only' == 2 seems to work pretty well. I tried all options on a single dataset and ended up only adding 10 cells (out of 800 total) via manual labeling.
I have a jupyter notebook as an example of running it on a single dataset if anyone is interested.
Hi @Cumol , I know this is from an old post - but it'd be great to take a look at your jupyter notebook if you don't mind sharing! thanks!
I didn't have much success seeding from anatomical masks -- most ROIs were already found with suite2p. Therefore I added cellpose support that is independent from the functional segmentation.
To improve suite2p (functional) detection, try:
ops['threshold_scaling']
ops['max_iterations']
ops['denoise'] = 1
Hi, I have a related issue - I am running Suite2p on videos taken of in vitro co-cultures with 2 cell types, and I already have separate sets of ROIs for each cell type. What I would like to do is run Suite2p separately on each cell type. I saw in the above discussion that some functionality has been implemented, but some of @Cumol's code was working and then stopped because of versioning issues. I know it's been about 2.5 years, but could anyone please provide any information on how to feed a set of ROIs to Suite2p? Any help is tremendously appreciated.
@carsen-stringer, @Cumol, @diegoasua, @alemessurier
Which functionality of Suite2p would you like to use? In my case, Cellpose + Suite2p took care of motion correction and ROI extracting.
Since you mentioned you are using cultures, I assume that neuropil contamination is not such an issue as they are probably not very dense (or am I mistaken)?
It might be easier to then just use Fiji/ImageJ, load in the different ROIs and just extract the traces from there (assuming your video is already motion corrected).
The spike deconvolution can be done separately using any of the published algorithms (If that is needed at all)
Thank you for the reply, @Cumol!
The cells are very dense, but there is no neuropil because they aren't neurons. The video is not motion-corrected and the cells move a little bit. Along with the motion, another issue with using Fiji/ImageJ is that there's a large number of experiments to analyze, and I already have a working Suite2p script to iterate over and analyze them.
The specific functionality I would like is to load the ROIs I have into Suite2p, and not use Suite2p/CellPose for detection. Everything else I would keep unchanged in the Suite2p pipeline.
I've been working on this problem in a slightly different format, but this code might be helpful. In my case, I'm trying to use masks generated in cellpose and apply them to a video I've already motion corrected in suite2p. For some reason, cellpose outputs slightly different masks than the cellpose-powered "anatomical only" mode in suite2p. Go figure.
Basically, I'm creating a stat file from cellpose masks, then feeding it into the exposed wrapper functions demoed in this notebook.
You'll need to provide locations to a cellpose _seg.npy file, as well as a folder containing suite2p outputs (data.bin and ops.npy, most importantly).
import numpy as np
from pathlib import Path
import suite2p
from suite2p.extraction.masks import create_cell_pix, create_neuropil_masks, create_masks, create_cell_mask
from suite2p.extraction.extract import extract_traces_from_masks
from suite2p.detection import roi_stats

# Read in mask data from cellpose
cellpose_fpath = "path/to/segmented/cellpose/file/yourfile_seg.npy"
cellpose_masks = np.load(cellpose_fpath, allow_pickle=True).item()

# Read in existing data from a suite2p run. We will use the "ops" and registered binary.
wd_path = 'path/to/your/suite2p/output/folder'  # The folder where processed suite2p files are saved. Consider making a backup copy of this folder before starting
wd = Path(wd_path)
ops = np.load(wd/'ops.npy', allow_pickle=True).item()
Lx = ops['Lx']
Ly = ops['Ly']
f_reg = suite2p.io.BinaryRWFile(Ly, Lx, wd/'data.bin')

# Using these inputs, we will first mimic the stat array made by suite2p
masks = cellpose_masks['masks']
stat = []
for u_ix, u in enumerate(np.unique(masks)[1:]):
    ypix, xpix = np.nonzero(masks==u)
    npix = len(ypix)
    stat.append({'ypix': ypix, 'xpix': xpix, 'npix': npix, 'lam': np.ones(npix, np.float32), 'med': [np.mean(ypix), np.mean(xpix)]})
stat = np.array(stat)
stat = roi_stats(stat, Ly, Lx)  # This function fills in remaining roi properties to make it compatible with the rest of the suite2p pipeline/GUI

# Using the constructed stat file, get masks
cell_masks, neuropil_masks = create_masks(stat, Ly, Lx, ops)

# Feed these values into the wrapper functions
stat_after_extraction, F, Fneu, F_chan2, Fneu_chan2 = suite2p.extraction_wrapper(stat, f_reg, f_reg_chan2=None, ops=ops)

# Do cell classification
classfile = suite2p.classification.builtin_classfile
iscell = suite2p.classify(stat=stat_after_extraction, classfile=classfile)

# Apply preprocessing step for deconvolution
dF = F.copy() - ops['neucoeff']*Fneu
dF = suite2p.extraction.preprocess(
    F=dF,
    baseline=ops['baseline'],
    win_baseline=ops['win_baseline'],
    sig_baseline=ops['sig_baseline'],
    fs=ops['fs'],
    prctile_baseline=ops['prctile_baseline']
)

# Identify spikes
spks = suite2p.extraction.oasis(F=dF, batch_size=ops['batch_size'], tau=ops['tau'], fs=ops['fs'])

# Overwrite files in wd folder (consider backing up this folder first)
np.save(wd/'F.npy', F)
np.save(wd/'Fneu.npy', Fneu)
np.save(wd/'iscell.npy', iscell)
np.save(wd/'ops.npy', ops)
np.save(wd/'spks.npy', spks)
np.save(wd/'stat.npy', stat)
Btw, @neurochatter, you can pass your cellpose model straight into suite2p; maybe try that to see if it works better than letting suite2p+cellpose handle it (it might be that some settings changed over time). Another reason could be that some of the cellpose settings you use are different from the suite2p cellpose parameters. Have a look here: https://suite2p.readthedocs.io/en/latest/settings.html#cellpose-detection
Are you detecting the ROIs in cellpose based on the mean image, or just a frame in the video?
@carsen-stringer any idea what might be causing cellpose outputs to differ from running anatomical detection in suite2p?
Brilliant thank you @neurochatter
(continuation of issue https://github.com/MouseLand/cellpose/issues/25)
Cellpose does a very good job detecting cells in my type of data (monolayer neuronal culture).
How can these masks be used as seeds for suite2p?