shiveshc / NIDDL

Deep denoising pushes the limit of functional data acquisition by recovering high SNR calcium traces from low SNR videos acquired using low laser power or smaller exposure time. Thus deep denoising enables faster and longer volumetric recordings.
GNU General Public License v2.0

Issue with denoising in napari #7

Open pramit57 opened 4 months ago

pramit57 commented 4 months ago

Hello, I am trying to use the program to denoise my calcium signals. I have the napari window open, and I select my tiff file (it's a timeseries) in the noisy_img field. I am not sure what exactly I have to put into gt_img and the run path. I tried putting another tiff file with high SNR into gt_img, selecting a directory as the run path, and running denoise. Doing this, or any variation of it, results in an error:

error.txt

If I just put the tiff file in the noisy_img path and run denoise, I get this error: error2.txt

Could someone tell me what I might be doing wrong, or help me troubleshoot this error?

shiveshc commented 3 months ago

Hi. Thanks for trying out NIDDL.

To answer your questions -

  1. You do not have to put anything in the 'gt_img' field. We provide that option so that users who have clean ground-truth images ('gt_img') corresponding to their 'noisy_img' can compare the denoised output with 'gt_img' to see how well it worked.

  2. In the 'run_path' field, you have to put the path of the directory where your trained model is saved. E.g. a trained-model directory will look something like this: 'https://github.com/shiveshc/NIDDL/tree/pytorch/test_runs/run_hourglass_wres_l1_mp1_m2D_d1_1_0'. It will contain model_weights.pt and model_config.pickle files, which specify the weights of the trained model and the configuration used for training on your data.

  3. If you have not trained a model on your data yet, I would advise stepping through this notebook 'https://github.com/shiveshc/NIDDL/blob/pytorch/example.ipynb' to quickly train a model on at least a few images. Subsequently you can train on all your data.
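A quick way to sanity-check a chosen run_path before running denoise is to verify that both of those files are present. A minimal sketch (the helper name `check_run_path` is mine, not part of NIDDL; only the two file names above come from the thread):

```python
import os
import tempfile

def check_run_path(run_path):
    """Return the list of required NIDDL run files missing from run_path.

    'model_weights.pt' and 'model_config.pickle' are the files a trained
    run directory is expected to contain.
    """
    required = ["model_weights.pt", "model_config.pickle"]
    return [f for f in required
            if not os.path.exists(os.path.join(run_path, f))]

# An empty directory is missing both files:
with tempfile.TemporaryDirectory() as d:
    print(check_run_path(d))  # -> ['model_weights.pt', 'model_config.pickle']
```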

Let me know if this works for you.

pramit57 commented 3 months ago

Hello, thank you for answering. I tried setting the run path to the test_runs directory (the hourglass_wres folder containing both files), but I still got a runtime error: error.txt

I don't have a trained model for this calcium data yet, but I was hoping to test it once with the already-trained models.

shiveshc commented 3 months ago
  1. The pretrained model will most probably not work well on your data because it was trained on synthetic data, just as an example. So I would highly suggest using https://github.com/shiveshc/NIDDL/blob/pytorch/example.ipynb to train a model on your data.
  2. From the error it seems that the dimensions of your images are smaller than what the pretrained model was trained on, hence after a couple of max-pooling layers the tensor dimensions become too small. Could you please send me the shape of the bea-1_plane2corr.tiff file so I can try to debug?
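The shrinking described in point 2 is easy to see arithmetically: each max-pooling layer floor-divides the spatial size, so small inputs collapse to 1x1 (or below the next kernel size) after a few layers. A minimal sketch (the pool size and layer count here are illustrative assumptions, not NIDDL's actual architecture):

```python
def spatial_size_after_pools(size, n_pools, pool=2):
    """Spatial dimension left after n_pools max-pooling layers.

    Each pooling layer floor-divides the size by the pool factor,
    mirroring how stride-2 max pooling shrinks a feature map.
    """
    for _ in range(n_pools):
        size = size // pool
    return size

# A 256-pixel dimension survives four poolings comfortably:
print(spatial_size_after_pools(256, 4))  # -> 16
# A small image collapses almost to nothing, which is where errors arise:
print(spatial_size_after_pools(20, 4))   # -> 1
```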
pramit57 commented 3 months ago

Hello, 1- I tried running example.ipynb on the example dataset (the one in the "synthetic_data/signal 5" directory), and I got two errors.

On step 9, when I run:

```python
c, h, w = train_X.shape[1], train_X.shape[2], train_X.shape[3]
summary(model, (c, h, w))
```

I get an error:

```
/home/ice.mpg.de/pbandyopadhyay/anaconda3/envs/niddl-env/lib/python3.9/site-packages/torchsummary/torchsummary.py:93: RuntimeWarning: overflow encountered in scalar add
  total_output += np.prod(summary[layer]["output_shape"])
```

And when I start training, I get this error:

```
AttributeError                            Traceback (most recent call last)
Cell In[10], line 1
----> 1 trainer(model_config)

File ~/NIDDL/train.py:275, in trainer(model_config)
    272 print(f'saved model weights at {save_model_path}')
    274 # save some random prediction examples
--> 275 save_example_denoising_on_random_test_data(
    276     model_config,
    277     test_X,
    278     test_Y,
    279     model,
    280     device,
    281     results_dir
    282 )
    284 # calculate accuracy on test data and save results
    285 calculate_metrics(
    286     model_config,
    287     test_X,
        (...)
    292     results_dir
    293 )

File ~/NIDDL/train.py:165, in save_example_denoising_on_random_test_data(model_config, test_X, test_Y, model, device, results_dir)
    163 save_name_pred = os.path.join(results_dir, f'pred_{temp_idx + 1}.png')
    164 if model_config.mode == '2D':
--> 165 cv2.imwrite(save_name_X, batch_x[0, 0, :, :].astype(np.uint16))  # this is the middle z plane corresponding to gt z plane
    166 cv2.imwrite(save_name_Y, batch_y[0, 0, :, :].astype(np.uint16))
    167 cv2.imwrite(save_name_pred, pred[0, 0, :, :].astype(np.uint16))

AttributeError: module 'cv2' has no attribute 'imwrite'
```

EDIT: The second issue was fixed once I reinstalled OpenCV (its version is now 4.10.0.84). I was able to complete the example notebook on the example dataset; I will try it on my own dataset now.

2- So bea-1_plane2corr.tiff is a 2D file with a resolution of 256x256 containing a 60-frame timelapse; I have attached the file here. I am not sure if this answers your question, so please let me know if you need more details.

bea-1_plane2corr.zip
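Given the stack layout described above, a numpy sketch of how such a timelapse splits into the individual 2D planes a 2D model would consume one at a time (the zero-filled array here is a stand-in for the real tiff data, and the (frames, height, width) axis order is an assumption):

```python
import numpy as np

# Stand-in for the 60-frame, 256x256 timelapse described above;
# real data would be loaded from the tiff file instead.
stack = np.zeros((60, 256, 256), dtype=np.uint16)

# Split the timeseries into individual 2D frames.
frames = [stack[t] for t in range(stack.shape[0])]
print(len(frames), frames[0].shape)  # -> 60 (256, 256)
```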

pramit57 commented 3 months ago

I am not sure if this will help, but I checked the version of OpenCV and it matches the one in environment.yml (copy-pasted from conda list):

```
opencv-python    4.8.0.74    pypi_0    pypi
```

EDIT: I removed opencv and installed it again; this time the version is 4.10.0.84, and it worked.