lmb-freiburg / Unet-Segmentation

The U-Net Segmentation plugin for Fiji (ImageJ)
https://lmb.informatik.uni-freiburg.de/resources/opensource/unet
GNU General Public License v3.0

Tiling makes tens of thousands of images #77

Closed: njhanne closed this issue 3 years ago

njhanne commented 3 years ago

Hello Dr Falk.

I am trying to finetune the 3D fluorescent microsphere model (v1) with a couple of labelled image stacks to pick out nuclei (1 train, 1 validation). I've labelled the even sections and 'ignored' the odd. My images are z-stacks at a resolution of 0.086 × 0.086 × 1.18 µm; the full stacks are 2048 × 2048 × 16, but the training images are 512 × 512 × 16.

Unfortunately the model requires a bit over 6 GB of VRAM, so I'm planning to set up an EC2 instance for this job. However, I think I will need ~600 GB of storage to run the model? When I try to run it on my personal computer it says it will need 65,536 tiled images, and then it just starts making them until I get HDD errors! Can I just lie to the fine-tuning and say the voxels are 1 × 1 × 1? Or have I messed something up?

Here's the log from one that I cancelled as soon as it started saving tiled images:

```
$ sftp "/home/nicholas/Desktop/test_images/models/3d_cell_net_microspores-fluorescence_v1.modeldef.h5" "nicholas@localhost:22:/home/nicholas/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c.modeldef.h5"
nicholas@localhost$ caffe_unet check_model_and_weights_h5 -model "unet-a6b6001a-7c1c-4a82-9823-d278e952a18c.modeldef.h5" -weights "/home/nicholas/Desktop/test_images/models/3d_cell_net_microspores-fluorescence_v1.caffemodel.h5" -n_channels 1 -gpu 0
Adding class nucleus
t = 1: scale = 0.004048583, offset = -2.0
Caffe blobs saved to '/tmp/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c948357448378015715.h5'
$ sftp "/tmp/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c948357448378015715.h5" "nicholas@localhost:22:/home/nicholas/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c_train_0.h5"
t = 1: scale = 0.0039525693, offset = -2.0
tiling = 4x128x128
nTiles = 65536
nicholas@localhost $ rm "/home/nicholas/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c.modeldef.h5"
nicholas@localhost $ rm "/home/nicholas/unet-a6b6001a-7c1c-4a82-9823-d278e952a18c_train_0.h5"
U-Net job aborted
```
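For a rough sense of the storage involved: the log reports tiling = 4x128x128 and nTiles = 65536. A back-of-the-envelope sketch, assuming float32 voxels and counting only the raw-data channel (the plugin also writes label and weight blobs plus per-file HDF5 overhead, so the real footprint is a multiple of this):

```python
# Storage estimate for the aborted run, from the values in the log above.
# Assumptions: float32 voxels, raw-data channel only.
tile_voxels = 4 * 128 * 128          # tiling = 4x128x128
n_tiles = 65536                      # nTiles from the log
bytes_total = n_tiles * tile_voxels * 4  # 4 bytes per float32 voxel

print(f"{bytes_total / 1e9:.1f} GB")  # ~17.2 GB for the raw channel alone
```

With label and weight blobs and overlap between tiles, the hundreds of gigabytes the plugin predicted are plausible, which is why reducing the tile count matters more than adding disk.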

Thank you for any help! I'm hoping this tool will work a lot better than my weak morphological filtering script!

ThorstenFalk commented 3 years ago

Hi,

no worries. If you want to finetune, you can of course change the element size to something useful for your case; it's not "lying", it's just a domain shift. But your raw element size puzzles me: are you sure about 0.086 µm in-plane vs. approximately 1 µm in the z-dimension? That is a quite dramatic difference; the network was trained for an aspect ratio of 1:2, not 1:15. Anyway, if your element size is correct, your images should shrink by an approximate factor of 8 when rescaled to the model element size of 1 x 0.5 x 0.5. What tile size do you use? Try increasing it (evenly for all dimensions) until you hit the memory limit of your GPU.
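The rescaling described here can be sketched as follows; the (z, y, x) ordering and the rounding are assumptions for illustration, not the plugin's exact implementation:

```python
def rescaled_shape(shape_zyx, voxel_zyx, model_elsize_zyx):
    """Approximate image size after resampling to the model's element size."""
    return tuple(
        max(1, round(n * v / e))
        for n, v, e in zip(shape_zyx, voxel_zyx, model_elsize_zyx)
    )

# Raw stack: 16 x 2048 x 2048 voxels at 1.18 x 0.086 x 0.086 um (z, y, x),
# resampled to the model element size of 1 x 0.5 x 0.5 um.
print(rescaled_shape((16, 2048, 2048), (1.18, 0.086, 0.086), (1.0, 0.5, 0.5)))
# -> (19, 352, 352): the in-plane dimensions shrink by ~5.8x, z barely changes
```

The far fewer rescaled voxels are what make the tile count drop so sharply once the element size is set correctly.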

njhanne commented 3 years ago

Thanks for the feedback! Yes, the ratio is correct; I do this because the confocal microscope takes a very long time to image (~30 s per channel per z-plane per xy tile) and time on the machine is expensive. Unfortunately this poor z resolution makes watershed segmentation hard, which is why I turned to U-Net.

Following your advice I got it down to a reasonable 441 tiles by telling the plugin the voxels are ~0.5 × ~0.5 × ~7.3 µm. If I don't get the ratio just right, Fiji throws matrix-out-of-bounds errors, but with a calculator it works out. I have it at the lowest tile size (188 × 188 × 92), which puts it at 6.2 GB of VRAM needed, just over the limit for my computer, so I will do the training on AWS. For segmentation I can use 116 × 116 × 20 tiles (5.4 GB of VRAM), which should run on my PC.
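The tile count follows from dividing the rescaled volume by the tile size. A rough sketch (an approximation only: U-Net's valid convolutions mean the usable output region is smaller than the input tile, so the plugin's real count comes out higher than this naive division):

```python
import math

def n_tiles(scaled_shape_zyx, tile_zyx):
    """Lower-bound tile count: tiles needed to cover the rescaled volume."""
    return math.prod(
        math.ceil(n / t) for n, t in zip(scaled_shape_zyx, tile_zyx)
    )

# With a claimed ~0.5 x 0.5 x 7.3 um element size, z rescales to
# roughly 16 * 7.3 / 1.0 ~ 117 planes while x/y stay near 2048.
print(n_tiles((117, 2048, 2048), (92, 188, 188)))  # 2 * 11 * 11 = 242
```

242 is a lower bound; the 441 tiles reported by the plugin are consistent with the extra overlap that valid convolutions require.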

njhanne commented 3 years ago

[Screenshot from 2021-03-17 08-41-45]

Well, I think it is working! With one training image and one validation image, 3000 iterations with validation every 50, I think it improved quite a bit! I went ahead and used an AWS machine with 16 GB of VRAM so I could keep the original image voxel size. I think it only cost 2-3 USD for the ~6 hours needed. Your v1 model is on the left and the finetuned one is on the right. I circled two nuclei in both to show that the finetuned model is doing a good job separating nuclei that are close together! I will try it out and see if it will work for my downstream analyses.

It seemed like every time I tried to run the model, something would prevent it from working.

Thank you very much for your help!