CRBS / cdeep3m

Please go to https://github.com/CRBS/cdeep3m2 for most recent version

2D data #73

Closed: albert597 closed this issue 4 years ago

albert597 commented 5 years ago

I have been trying to use this GitHub package on AWS with my 2D data. Inside the training folder I have the images and the corresponding label for each 2D image. The 2D images have different dimensions.

When I do: PreprocessTrainingData.m ~/training/images/ ~/training/labels/ ~/augmentedtraining/

I get the error: error: imageimporter: A(I,J,...) = X: dimensions mismatch

I am assuming this is because the preprocessing treats the data as 3D and tries to stack every image in the folder into a single volume, which fails because the 2D images have different dimensions.
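For reference, here is a minimal sketch of how I checked this (assuming Python with Pillow installed and PNG/TIFF images; the paths match my setup above):

```python
import os
from PIL import Image  # pip install Pillow

folder = os.path.expanduser("~/training/images/")
sizes = set()
for name in sorted(os.listdir(folder)):
    if name.lower().endswith((".png", ".tif", ".tiff")):
        with Image.open(os.path.join(folder, name)) as im:
            sizes.add(im.size)  # (width, height)

print(sizes)  # more than one entry means the dimensions don't match
```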

How do I tell PreprocessTrainingData that this is 2D data rather than 3D data? I have looked through the source code but could not find any flag for it.

Thank you for your time.

MatthewBM commented 5 years ago

Hi @AlbertPun

There are a couple of directions we can go to fix this. The easiest would be to pad your data (adding blank pixels to the edges) so that all the images have the same dimensions.

You can do this in Fiji with Image > Adjust > Canvas Size; here's an example thread about that: http://imagej.1557.x6.nabble.com/Padding-in-ImageJ-td5000622.html
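If you'd rather script the padding than do it in Fiji, here's a rough sketch of the same idea (assuming Python with Pillow and 8-bit grayscale PNG/TIFF files; the paths are just examples, and you'd apply the identical padding to the label images so they stay aligned):

```python
import os
from PIL import Image  # pip install Pillow

src = os.path.expanduser("~/training/images/")
dst = os.path.expanduser("~/training/images_padded/")
os.makedirs(dst, exist_ok=True)

files = [f for f in sorted(os.listdir(src))
         if f.lower().endswith((".png", ".tif", ".tiff"))]

# Target canvas: the largest width and height found in the folder.
w = h = 0
for f in files:
    with Image.open(os.path.join(src, f)) as im:
        w, h = max(w, im.width), max(h, im.height)

# Paste each image onto a blank (black) canvas of that size.
for f in files:
    with Image.open(os.path.join(src, f)) as im:
        canvas = Image.new(im.mode, (w, h), 0)
        canvas.paste(im, (0, 0))  # keep the original at the top-left
        canvas.save(os.path.join(dst, f))
```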

Then make sure you're using --models 1fm in your runtraining.sh and runprediction.sh calls.

Let me know how it goes.

albert597 commented 5 years ago

Thank you for your answer! I am reconsidering how to use your cdeep3m network to train on my data and was wondering if you had any suggestions.

My data consists of ~50 separate example volumes, where only some of the xy slices in each volume are labeled.

The data is labeled this way because we were originally going to use the 3D U-Net (https://arxiv.org/abs/1606.06650), but we wanted to try out your network.

I was going to just extract each labeled slice from the 50 volumes and treat the result as 2D data, but I would like to get the benefits of cdeep3m's 3D training.

If I understand correctly, the 3D networks take in 5 or 3 slices and output the labels of the middle slice. Instead of having to label every slice, is there any way I can give the network multiple training examples, each consisting of 5 slices with only the middle slice labeled? I've sketched what I mean below.
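To illustrate what I have in mind (the array shapes here are made up; this is just a sketch of the input/label pairing, not working training code):

```python
import numpy as np

volume = np.random.rand(50, 1024, 1024)              # (z, y, x) image stack
labels = np.random.randint(0, 2, size=volume.shape)  # hypothetical dense labels

half = 2  # a 5-slice window gives 2 slices of context on each side
windows, targets = [], []
for z in range(half, volume.shape[0] - half):
    windows.append(volume[z - half : z + half + 1])  # 5 consecutive slices in
    targets.append(labels[z])                        # only the middle slice out
```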

Please let me know if you need any more clarification.

Thanks!

MatthewBM commented 5 years ago

Hi @AlbertPun

To leverage any 3D neural network you must have 3D training data, i.e. labels across consecutive xy slices of a 3D volume; otherwise there's no way to train the network to increase accuracy by using 3D information. I'd recommend annotating at least one volume with 3D labels (something like a 1000x1000-pixel crop across 12 slices).
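If it helps, here's a rough sketch of cutting such a crop out of one of your stacks for annotation (assuming Python with tifffile; the file name and crop origin are placeholders):

```python
import tifffile  # pip install tifffile

volume = tifffile.imread("volume_01.tif")  # (z, y, x) stack
z0, y0, x0 = 0, 0, 0                       # pick a representative region
crop = volume[z0 : z0 + 12, y0 : y0 + 1000, x0 : x0 + 1000]
tifffile.imwrite("annotation_crop.tif", crop)  # densely label this crop
```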