JinghaoLu / MIN1PIPE

A MINiscope 1-photon-based Calcium Imaging Signal Extraction PIPEline.
GNU General Public License v3.0

Missing `aflag` when `spatialr` and `se` are not empty. And inconsistent spatial downsampling. #69

Closed tuanpham96 closed 2 years ago

tuanpham96 commented 2 years ago

Issue 1 - missing aflag

When both spatialr and se are passed in and are not empty (i.e. not []), and assuming overwrite_flag = true on line 74, line 88 does not run because aflag was never defined.

https://github.com/JinghaoLu/MIN1PIPE/blob/119052a5649276d418baadf1926743d3eb0604a5/min1pipe.m#L88

My modification for this to work is an addition after line 13:

```matlab
defpar = default_parameters;
aflag = false; % added change here
```

Question: Is this the right assumption/way to do it?

Issue/question 2 - inconsistent spatial downsampling

I ran 2 videos from an Inscopix miniscope, initially with spatialr = [] and se = [] (i.e. essentially the default demo_min1pipe.m). Both were downsampled, but the rates differed from each other and from the default parameter.

Question: Can you explain why there would be such a discrepancy?

Final note

Right now I'm setting spatialr = 1 directly to see what happens.

And I also set se = 9. Is this the right setting for Inscopix? I followed your comment below:

https://github.com/JinghaoLu/MIN1PIPE/blob/119052a5649276d418baadf1926743d3eb0604a5/min1pipe_HPC.m#L46

Thank you!

JinghaoLu commented 2 years ago

Hi, thanks for the questions.

For issue 1: you are right, and the code has been modified.

For issue 2: when you set the two parameters to empty, the code uses auto-downsampling, which is based entirely on the video you are using. Right now the downsampling factor estimation is a bit rigid, so you will see different downsampling for two different videos from the same microscope setup. That is also different from the default parameter, which is a hard-coded 0.5.

There is no fixed setting per microscope, only per combination of microscope and brain region. If you want a fixed, identical downsampling across videos, I would suggest passing the parameters explicitly instead of leaving them empty. Set spatialr as small as possible (the smaller the better), and se to ~3-5.
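For example, a minimal sketch of passing fixed parameters instead of empty ones, based on the demo script. The values are illustrative assumptions, and the exact argument order should be checked against demo_min1pipe.m in your version:

```matlab
%% sketch: fixed parameters instead of auto-estimation (example values)
Fsi = 20;        % sampling rate of the raw video (Hz)
Fsi_new = 20;    % desired sampling rate after temporal downsampling
spatialr = 0.5;  % fixed spatial downsampling ratio; no auto-estimation is triggered
se = 5;          % structure element: neuron radius in pixels AFTER downsampling
ismc = true;     % run motion correction
flag = 1;        % processing flag as in the demo script
[fname, frawname, fregname] = min1pipe(Fsi, Fsi_new, spatialr, se, ismc, flag);
```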

tuanpham96 commented 2 years ago

Great! Thanks for the confirmation on issue 1.

> Right now the downsampling factor estimation is a bit rigid so you will see the downsampling is different for two different videos of the same microscope setup. That is totally different from the default parameter, which is a hard number 0.5.
>
> There is no fixed setting for microscope but the combination of microscope and brain region. I would suggest if you want a fixed, same downsampling across videos, you can pass the parameters instead of leaving them empty.

Just to confirm: that means if I set spatialr to a fixed value of 0.4 instead of passing [], the resulting downsampling factor would be constant and close to 0.4, instead of being estimated and possibly turning out differently each time. Is that correct?

> [...] and se to be ~3-5.

Can you elaborate on what this is exactly and what unit it is in? Is it an estimate of the neuron diameter in pixels? If I downsample spatially, does that mean se also has to scale?

Update: I tried spatialr = 1, se = 9 and it turned out horribly :(. The _reg.mat data looked like it was heavily filtered, and the resulting ROIs are very tiny. I'm assuming that means I need to scale se up to match the non-downsampled neuron size. Is that right?

Unrelated question about progress saving: with a few videos that I ran, a lot of time was spent on motion correction, specifically during both the intra- and inter-section processing/registration. Especially the latter: after the message "data prep is done", it took a significant amount of time (even on downsampled data) before the next progress print appeared.

Is there any way of saving these intermediate results in case something happens?

Thanks!

tuanpham96 commented 2 years ago

I apologize for closing this prematurely; it was an accidental click. Please see above for the questions I still have.

JinghaoLu commented 2 years ago

> Great! Thanks for the confirmation on issue 1.
>
> > Right now the downsampling factor estimation is a bit rigid so you will see the downsampling is different for two different videos of the same microscope setup. That is totally different from the default parameter, which is a hard number 0.5. There is no fixed setting for microscope but the combination of microscope and brain region. I would suggest if you want a fixed, same downsampling across videos, you can pass the parameters instead of leaving them empty.
>
> Just to confirm, that means if I set spatialr to a fixed value 0.4 instead of passing [], the resulting downsampling factor would be constant, and near to 0.4, instead of being estimated that might turn out differently. Is that correct?

That's right.

> > [...] and se to be ~3-5.
>
> Can you elaborate on what this is exactly and what unit it is in? Is it an estimate of the neuron diameter in pixels? If I downsample spatially, does that mean se also have to scale?

You can refer to the README, subsection "key parameters" under the "usage" section. In brief, you don't need to worry about units, since the code takes no unit as input. So the short answer is yes, it is the neuron radius in pixels after downsampling. And yes, you need to adjust se if spatialr is changed.
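As a concrete illustration of that scaling (my own back-of-the-envelope arithmetic, not code from the pipeline): if the neuron radius is measured on the full-resolution frame, se should shrink proportionally with spatialr.

```matlab
% sketch: scale se with the spatial downsampling ratio (illustrative values)
radius_full = 9;    % typical neuron radius at full resolution (px)
spatialr = 0.4;     % chosen spatial downsampling ratio
se = max(1, round(radius_full * spatialr));  % radius after downsampling -> se = 4
```

With these example numbers se lands at 4, inside the suggested ~3-5 range.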

> Update: I tried spatialr = 1, se = 9 and it turned out horrible :(. The _reg.mat data looked like it was heavily filtered. And the resulting ROIs' sizes are very tiny. I'm assuming that means I need to upscale it to match with non-spatially-sampled neuron size. Is that right?
>
> Unrelated question about progress saving: With a few videos that I ran, a lot of time was spent on motion correction, specifically during both intra- and inter-section processing/registration. Especially the latter one, after the message data prep is done, it took a significant portion of time (even on downsampled data) to even see the next progress print.
>
> Is there anyway of saving the intermediate of these in case something happens?

The likely answer to these questions is that you need to do spatial downsampling. All the algorithms depend on the parameter settings, which means that if you do not set appropriate parameters, you are almost guaranteed not to get the desired results. So spatially downsample the video until the typical neuron radius is ~3-5 pixels. That also means a much smaller data size for motion correction, so you save time. (The purpose of motion correction in MIN1PIPE is to enforce accuracy at the cost of time, so if you don't have large motion, you can turn it off.)
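One way to pick spatialr from that rule of thumb (my own helper arithmetic, not part of MIN1PIPE): measure a typical neuron radius in the raw frame and solve for the ratio that lands it in the 3-5 px range.

```matlab
% sketch: derive spatialr so the downsampled neuron radius is ~3-5 px
radius_raw = 10;                          % measured typical neuron radius in the raw video (px)
target_radius = 4;                        % aim for the middle of the 3-5 px range
spatialr = target_radius / radius_raw;    % -> 0.4
se = target_radius;                       % se then equals the target radius in pixels
```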

It is true that the majority of time is spent in motion correction, but I have no idea of your sense of scale, so I cannot judge whether "a significant portion of time" is normal or whether something else happened that prevented the code from running.

Thanks!

tuanpham96 commented 2 years ago

Thanks for your answers! I think I'm going to play around with those two parameters on trimmed-down versions of the videos to figure out the best combination.

> The majority of time is spent in motion correction; that is true, but I have no idea what your perception of time is, so I cannot judge whether "a significant portion of time" is appropriate or something else happened preventing the running of the codes.

To clarify, these are 20 Hz, 30-minute videos. The first few steps before registration usually take around 30 minutes to an hour, but registration can take 2-3 hours even after downsampling, and sometimes it is unclear what the progress is during the inter-section part.

I'm still testing the program, which is really amazing by the way, but I should have tested on the trimmed versions instead, as I said above. Once I figure out the right combination, I can just leave it running for the day or submit it elsewhere.

Anyway, thanks for taking the time to answer! I'll close the issue now.

JinghaoLu commented 2 years ago

To me that running time seems normal if you have few CPU cores. To speed up, you will need more cores for parallel computing (you will see a boost in speed if you submit it to a multicore HPC). There are no subsections in the inter-section part; it just runs through sections throughout the video frames.