ZyHUpenn opened this issue 1 year ago
I computed the average over the cell's region and there seem to be many irregular spikes in the trace. I was wondering whether this is because the model's training set is quite different from my data, which may cause some noise to be left in rather than removed.
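(For clarity, the ROI averaging described above is assumed to be a per-frame mean over a boolean cell mask, along these lines — a minimal sketch, not the thread author's actual code:)

```python
import numpy as np

def roi_mean_trace(movie, mask):
    """Mean fluorescence per frame over a boolean ROI mask.

    movie: (T, H, W) array of frames; mask: (H, W) boolean cell mask.
    Returns a length-T trace.
    """
    # Index the spatial dims with the mask, then average over ROI pixels.
    return movie[:, mask].mean(axis=1)
```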
Hi, I am not sure what the problem is from the trace alone. We used bs1 for data that does not contain structured noise and bs3 otherwise. It might be better to try bs3 if denoising does not work correctly.
Did you use the pre-trained model we uploaded or trained the model on your own?
I used the pre-trained model, and I remember it is bs1.
Comparing with the denoised data in your publication, your curve is clean and smooth, but when I directly applied SUPPORT to our data, the baseline seems incorrect.
Here are the model parameters that were used for the mouse and zebrafish data in Fig. 4 (Denoising population voltage imaging data) of our paper: GDrive
Try those two parameter sets. Note that differences in imaging rate, spike properties, noise level, etc. between the training and test data could hinder perfect denoising.
The best way is to train SUPPORT on your own data. If these pre-trained models do not work well, or if you have any difficulties with training, let us know.
I will try it, thank you very much!
Sorry, it seems the pre-trained model's `in_channels` is 61, but the `in_channels` in the model parameters you shared is 16. Shall I use 61, or change the layer architecture?
Ah, sorry for the inconvenience. That was a typo; 61 is correct.
You can load both parameters like this. (I checked it now!)
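(The snippet referenced here was attached in the original thread. As a stand-in, the general `load_state_dict` pattern looks like the sketch below, using a tiny dummy module instead of the real `SUPPORT` class; with an actual checkpoint you would pass the `.pth` path to `torch.load`.)

```python
import torch
import torch.nn as nn

# Stand-in module for illustration only; the real class is SUPPORT from the
# NICALab/SUPPORT repo, constructed with in_channels=61 as clarified above.
class TinyNet(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)

# Pretend this came from: torch.load("pretrained.pth", map_location="cpu")
pretrained_state = TinyNet(in_channels=61).state_dict()

model = TinyNet(in_channels=61)  # in_channels must match the checkpoint (61, not 16)
model.load_state_dict(pretrained_state)
```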
Yes, I already validated the model on your zebrafish data and it works well. It seems we need to train models on our own data to get good results. Thank you very much for the help!
Hi, can I ask what raw data you used in Fig. 4a and 4d? I downloaded the mouse cortex data and found they are all single-cell images, and the zebrafish data seem to be brain images, while Fig. 4d seems to be from the body. Also, can I ask how much data you used in training? I trained on some of my data, but the SNR improvement is not obvious. Thanks for your time!
Hi, the data used in Fig.4a can be downloaded from (https://zenodo.org/record/4515768#.ZC0_DHZByUk), and the data used in Fig.4d can be downloaded from (https://figshare.com/articles/dataset/Voltage_imaging_in_zebrafish_spinal_cord_with_zArchon1/14153339). We trained each model using a single video.
Thank you very much! Can I ask how you trained on your video, e.g., the learning rate or other parameters? To save time, I trained with a 30000 × 200 × 96 movie (imaging rate = 1000 Hz) for 100 epochs; the loss is around 0.1 and the loss decay is barely visible. Shall I increase the training epochs or modify some other parameters?
We trained the model with the default parameters uploaded at (https://github.com/NICALab/SUPPORT/blob/main/src/utils/util.py). If you find that the SNR improvement is not obvious, I would recommend increasing the `bs_size` to [3, 3] or higher.
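(For concreteness, one rough way to quantify SNR on an ROI trace is peak amplitude over baseline noise. This is an illustrative definition for sanity checks, not necessarily the metric used in the SUPPORT paper:)

```python
import numpy as np

def rough_snr(trace, baseline_pct=50):
    """Rough SNR: peak deviation from baseline over baseline noise std.

    One illustrative definition; the baseline percentile and the use of the
    sub-baseline std as the noise estimate are assumptions, not the paper's metric.
    """
    baseline = np.percentile(trace, baseline_pct)
    noise_sd = np.std(trace[trace <= baseline])  # std of sub-baseline samples
    return (trace.max() - baseline) / noise_sd
```

Computing this on the same ROI before and after denoising gives a single number to compare, which makes "not much improvement" easier to pin down.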
I used `bs_size` [3, 3] to train on my movie, and there doesn't seem to be much improvement. The training loss is around 0.10. My movie is 200 × 96 × 30000, and each frame looks like the attached image. Could the different spatial size, or the large amount of background in our movie, be causing the problem? Do you think I need to modify the network architecture, e.g., change the kernel size or add some regularization?
Is the attached frame denoised, or is it raw?
What does "not much improvement" mean in detail? Do you observe noise also in the denoised frame (spatially)? Or is it visually okay, but the noise did not reduce in the ROI trace (temporally)?
If the noise remains in the denoised frame, we usually increase the bs_size.
P.S. Would it be possible to share the data and let us try processing it? We cannot be sure just by looking at one frame of the data... Based on the current information, I think your data is not much different from the data we have processed.
Thank you very much! I've sent our data by email. Also, when I used the pre-trained model and your parameters to denoise your data, some spike signals seem to be lost. I've attached a denoised trace from the chosen ROI; do you have any idea how this could happen?
The red line is the denoised trace and the blue line is the raw trace.
movie.tif https://drive.google.com/file/d/15VvMADVYLc-O2bnEzA_Q9GyzSip6EW77/view?usp=drive_web Hi, the attached frame is from the raw movie; the denoised movie still retains some noise. I have attached the raw movie data in this email. If you can try to process it, that would be very helpful! Thank you for your reply again!
Hi, the model we uploaded to GitHub was trained on another dataset, which I believe was a mouse calcium imaging dataset. We uploaded the sample model just for demo purposes. As the modality of that data is quite different from the zebrafish voltage imaging dataset, the denoising performance may not be satisfactory.
We'll try SUPPORT on your data. If you observe noise in the denoised data, we typically increase the size of the blind spot. And if the dF/F0 of the spikes is reduced after denoising, we train for longer.
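(For reference, the dF/F0 check mentioned here can be computed, for example, with a low-percentile baseline. The percentile choice of F0 below is an assumption, not necessarily what the authors used:)

```python
import numpy as np

def dff0(trace, f0_pct=10):
    """dF/F0 with F0 taken as a low percentile of the trace.

    A percentile baseline is a common choice; the exact baseline definition
    here is an assumption for illustration.
    """
    f0 = np.percentile(trace, f0_pct)
    return (trace - f0) / f0
```

Comparing `dff0(raw_trace).max()` against `dff0(denoised_trace).max()` at the spike times shows whether denoising attenuated the spike amplitude.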
Also, we are currently unable to download the data; it seems we need permission.
These are our Gmail addresses, so please add us to your shared link! After that, we'll try your data.
Minho (djaalsgh214@gmail.com) Seungjae (jay0118jay@gmail.com)
Thank you very much! I've added you to the movie link. And regarding the mouse calcium imaging dataset you mentioned, do you mean the mouse cortex data? I just tried the mouse cortex movie. After denoising, I flipped the data to get the trace, and the spikes seem to be eliminated. Do I need to do some preprocessing or postprocessing for negative-going indicator data, or did I do something wrong?
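(For context, a minimal sketch of the "flip" for a negative-going voltage indicator, inverting around a median baseline so spikes point upward. The baseline choice is an assumption here, and whether the flip should happen before or after denoising is exactly the question for the authors:)

```python
import numpy as np

def invert_trace(trace):
    """Flip a negative-going voltage trace so spikes point upward.

    Reflects the trace around its median baseline; the median as baseline
    is an illustrative assumption.
    """
    baseline = np.median(trace)
    return 2 * baseline - trace  # downward deflections become upward
```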
Hi, we have denoised your data, and would like to share what we have done.
In short, we believe that the spikes (based on our reading of the traces) are preserved and the noise has been reduced after denoising.
Here is the shared folder that contains 1. denoised image, 2. Mean traces from raw and denoised data, 3. model pth file, and 4. ImageJ ROI we used for analysis. shared GDrive folder
Please take a look and check whether the results are the same as yours, or better. There are trace plots saved as PNG files in the `roi_traces` folder, so I recommend taking a look at those.
It would be great if you could point out the ROI or temporal region where the denoised data shows poor performance. Additionally, we found fluctuations in both the raw and denoised data in the subthreshold region. Since the fluctuations are quite regular, we think they are not a noise component, and therefore SUPPORT did not remove them.
Below are the details. I assume most of the things are similar to your experiment.
We trained for about 150 epochs (~26 hours on an RTX 3090 Ti GPU). The model specification is as follows:
```python
model = SUPPORT(in_channels=61, mid_channels=[64, 128, 256, 512, 1024], depth=5,
                blind_conv_channels=64, one_by_one_channels=[32, 16],
                last_layer_channels=[64, 32, 16], bs_size=[3, 3]).cuda()
```
where only `mid_channels` is increased compared to the default. This simply increases the capacity of the model.
And we used `patch_size` = [61, 96, 96] and `patch_interval` = [1, 48, 48], since the width of your data is smaller than the default `patch_size` of 128.
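(To build intuition for these settings, a simple sliding-window tiling with this `patch_size` and `patch_interval` would cover the 30000 × 200 × 96 movie as sketched below. The actual tiling logic in the SUPPORT code may differ; this is only an illustration of the window arithmetic:)

```python
import math

def num_patches(dim, patch, interval):
    """Number of sliding-window positions needed to cover `dim` pixels/frames
    with window `patch` and stride `interval` (a simple ceil-cover scheme;
    the real SUPPORT tiling may handle edges differently)."""
    if dim <= patch:
        return 1
    return math.ceil((dim - patch) / interval) + 1

# Movie from the thread: 30000 frames of 200 x 96 pixels,
# patch_size [61, 96, 96], patch_interval [1, 48, 48].
shape, patch_size, patch_interval = (30000, 200, 96), (61, 96, 96), (1, 48, 48)
counts = [num_patches(d, p, s) for d, p, s in zip(shape, patch_size, patch_interval)]
```

With these numbers, the 96-pixel axis is covered by a single patch, which is why a `patch_size` of 128 would not fit this movie's width.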
Thank you very much for trying our data and helping us figure out the problem! The result is good! Can I ask a question which may be silly: did you extract the subthreshold signal? I am slightly confused about what you mean by regular fluctuations in the subthreshold region. Can I understand it as: you think the frequency and strength of those fluctuations are consistent in the subthreshold region, so they are not noise, since noise should be independent?
Glad to hear that the result is good!
To answer your question, please see the `raw.csv` and `denoised_bs3_150.csv` files.

Thank you for your explanation! Can I ask what data and model you used for Supplementary Figure 9? I want to see how SUPPORT works on unseen data.
We used the data uploaded at (https://zenodo.org/record/4515768#.ZE9k83ZByUl). The name of the data was L1.03.35. We also uploaded the model we used (https://github.com/NICALab/SUPPORT/blob/main/src/GUI/trained_models/L1_generalization.pth). Please let us know if anything goes wrong.
Sorry, it's been a long time. We've tried SUPPORT many times; the results on the zebrafish and mouse cortex data are brilliant. But we still couldn't get the same good denoising performance on our own data. We checked the movie I shared with you; there may be some regular fluctuation from the sample itself that cannot be denoised. Then we used another movie, which should have fewer such fluctuations, but we found a similar issue. Also, strangely, when I used the zebrafish model on our second movie, the denoising performance was much better than with the model trained on that movie itself.
So I want to ask: does that mean we haven't trained our model well enough? Can I ask the final loss of your zebrafish and mouse models? The attached figure is the trace comparison. Thank you!
And this is our second movie. I understand that your time is valuable, but if you have some spare time, you could try our data. I'm thinking there must be some issue in my training process, so please let me know if you find anything about our problem. Thank you again! https://drive.google.com/file/d/1LqoHeDXPeDmdgiM_5Z1ju9lRr7RJDyWg/view?usp=sharing
Hi, I'm really interested in which model you used for the voltage imaging data of the mouse cortex layer and zebrafish, because recently I have been dealing with similar voltage imaging data, and I found some tiny irregular spikes that look like noise on my baseline. Did you use bs1 or bs3, or other models?