soltanianzadeh / STNeuroNet

Software for the paper "Fast and robust active neuron segmentation in two-photon calcium imaging using spatio-temporal deep learning," Proceedings of the National Academy of Sciences (PNAS), 2019.
https://www.pnas.org/content/early/2019/04/10/1812995116
Apache License 2.0

Creating new ground truth masks #16

Closed mtugsbayar closed 5 years ago

mtugsbayar commented 5 years ago

Hello! I'm interested in testing this for custom one-photon data, but I'm not entirely sure how the ground truth masks were prepared. Is there a way to use existing ground truth data in h5 directly as a mask, or do I have to manually relabel using the provided MATLAB GUI? Thank you.

soltanianzadeh commented 5 years ago

The current code assumes the masks are saved in a .mat file, with each neuron mask stored in the first two dimensions of a 3-dimensional binary matrix. You could rewrite the label generation code used for training to meet your needs. Basically, this function reads the neuron masks and the calcium recording, extracts the neural signals, removes neuropil contamination, and then creates a binary "video" label. This video label is the same size as the calcium recording, and each of its frames contains the masks of the neurons active at that frame. This video should be saved in .nii format for the training process.
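In case a sketch helps: the frame-by-frame label construction described above can be written roughly like this in Python (the function name and array shapes are illustrative, not the repo's actual API; it assumes you already have the masks as an H×W×N binary array and a per-frame activity matrix from your own spike detection):

```python
import numpy as np

def make_video_label(masks, active):
    """Build a per-frame binary "video" label from neuron masks.

    masks  : (H, W, N) binary array, one neuron mask per slice
    active : (N, T) boolean array, True where neuron n is active in frame t
    returns: (H, W, T) binary label, same spatial size as the recording
    """
    H, W, N = masks.shape
    N2, T = active.shape
    assert N == N2, "masks and activity must describe the same neurons"
    label = np.zeros((H, W, T), dtype=np.uint8)
    for t in range(T):
        # union of the masks of all neurons active in frame t
        label[:, :, t] = masks[:, :, active[:, t]].any(axis=2)
    return label

# toy example: 2 neurons, 3 frames
masks = np.zeros((4, 4, 2), dtype=bool)
masks[0:2, 0:2, 0] = True   # neuron 0: top-left 2x2 block
masks[2:4, 2:4, 1] = True   # neuron 1: bottom-right 2x2 block
active = np.array([[True, False, True],
                   [False, True, True]])
label = make_video_label(masks, active)
```

The resulting array could then be written out as .nii with, for example, nibabel (`nib.save(nib.Nifti1Image(label, np.eye(4)), path)`), if that package fits your pipeline.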

Since neuropil contamination and cross-talk are stronger in one-photon data, you might need to use other spike detection methods.
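For what it's worth, a common baseline (generic practice, not the repo's exact method) is to subtract a scaled neuropil trace and then threshold a robust ΔF/F; the 0.7 coefficient and the MAD-based threshold below are conventional heuristics, not values from the paper:

```python
import numpy as np

def corrected_trace(F_cell, F_neuropil, r=0.7):
    # subtract a scaled neuropil signal; r = 0.7 is a common heuristic
    return F_cell - r * F_neuropil

def active_frames(F, k=3.0):
    # flag frames whose dF/F exceeds k robust (MAD-based) standard deviations
    f0 = np.median(F)
    dff = (F - f0) / f0
    sigma = 1.4826 * np.median(np.abs(dff - np.median(dff)))
    return dff > k * sigma

# toy trace: flat baseline with a single 5-frame transient
F = np.full(100, 100.0)
F[50:55] = 200.0
active = active_frames(F)
```

For one-photon data with heavy cross-talk, a dedicated deconvolution or spike-inference package would likely be more robust than a simple threshold like this.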

mtugsbayar commented 5 years ago

Thank you for the fast reply! How were the initial .mat files created? Did you use the initial .json labels from Neurofinder, or did you manually rewrite them in .mat format with adjustments?

I was also wondering how the average neuron areas were determined for watershedding since neuron size is so variable. If I'm remembering correctly, neuron sizes being off can mess up some of the older algorithms like CNMF. Is there a good fix to that here or is neuron size something to be determined beforehand?

soltanianzadeh commented 5 years ago

For Neurofinder, the .json labels were converted into a .mat file and then manually inspected for errors with the Manual Labeling GUI.
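In case it's useful to others reading this, that conversion can be sketched like this (the Neurofinder regions file is a list of objects with row/column `"coordinates"`; the function name and the `"masks"` key are illustrative, not necessarily what the repo expects):

```python
import json
import numpy as np

def regions_to_masks(regions, height, width):
    """Convert Neurofinder-style regions to an (H, W, N) binary mask array.

    Each region is a dict like {"coordinates": [[row, col], ...]}.
    """
    masks = np.zeros((height, width, len(regions)), dtype=np.uint8)
    for n, region in enumerate(regions):
        coords = np.asarray(region["coordinates"])
        masks[coords[:, 0], coords[:, 1], n] = 1
    return masks

# toy example: two small regions on a 4x4 field of view
regions = json.loads('[{"coordinates": [[0, 0], [0, 1]]},'
                     ' {"coordinates": [[2, 2]]}]')
masks = regions_to_masks(regions, 4, 4)
```

The resulting array can then be written to disk with `scipy.io.savemat(path, {"masks": masks})` before inspecting it in the GUI.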

As for the average neuron size, it was determined from the manual markings. The watershed step is needed for the overlapping cases. Our framework segments neurons in consecutive temporal intervals, and since overlapping neurons separate into different time intervals, the neuron fusion process at the end will circumvent any possible errors in the watershed step.
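As an illustration of that watershed step, here is a generic distance-transform watershed with scikit-image (not the repo's exact implementation; the hand-picked seed points below stand in for centers that would in practice be derived from the expected neuron size):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_overlapping(mask, seed_points):
    """Split a merged binary mask into individual neurons via watershed.

    mask        : (H, W) binary array containing merged/touching neurons
    seed_points : list of (row, col) approximate neuron centers
    """
    distance = ndimage.distance_transform_edt(mask)
    markers = np.zeros(mask.shape, dtype=int)
    for i, (r, c) in enumerate(seed_points, start=1):
        markers[r, c] = i
    # flood the negated distance map from the seeds, restricted to the mask
    return watershed(-distance, markers, mask=mask)

# toy example: a 5x10 blob standing in for two merged 5x5 neurons,
# padded with background so the distance transform is well-defined
mask = np.zeros((7, 12), dtype=bool)
mask[1:6, 1:11] = True
labels = split_overlapping(mask, [(3, 3), (3, 8)])
```

In the full pipeline this split would only need to be approximately right, since (as noted above) the later fusion across time intervals can absorb watershed mistakes.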

mtugsbayar commented 5 years ago

That makes sense! Thank you very much. I'll keep you updated on how it goes if you are interested.