A PyTorch-based library for working with 3D and 2D convolutional neural networks, with focus on semantic segmentation of volumetric biomedical image data
MIT License
Improve PatchCreator interface, add basic 2D training support #22
There are a few breaking changes in this PR. If you have old custom training scripts, please refer to the updated scripts in the examples directory and the updated docs to fix them.
Highlights:
Trainer (a.k.a. StoppableTrainer) and PatchCreator are now cleanly separated and can both be used independently. Therefore, using Trainer with a custom Dataset or using PatchCreator with a custom training loop should be straightforward now.
PatchCreator's interface is now much closer to torchvision.datasets, which are the PyTorch standard for torch.utils.data.Dataset implementations.
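To illustrate what a torchvision.datasets-style interface looks like, here is a minimal sketch of a map-style dataset that indexing returns an (input, target) pair from, as torch.utils.data.Dataset implementations conventionally do. All names here (ToyPatchDataset and its parameters) are illustrative, not PatchCreator's actual API, and numpy stands in for torch tensors so the sketch stays dependency-light.

```python
import numpy as np

class ToyPatchDataset:
    """Illustrative map-style dataset following the torchvision.datasets
    convention: __getitem__ returns an (input, target) pair and __len__
    reports the number of samples. Not the actual PatchCreator API."""

    def __init__(self, num_samples=10, patch_shape=(1, 32, 32)):
        self.num_samples = num_samples
        self.patch_shape = patch_shape

    def __getitem__(self, index):
        # Deterministic dummy data; a real dataset would load and crop
        # a patch from disk here.
        rng = np.random.default_rng(index)
        inp = rng.standard_normal(self.patch_shape).astype(np.float32)
        target = (inp[0] > 0).astype(np.int64)  # dummy segmentation target
        return inp, target

    def __len__(self):
        return self.num_samples


ds = ToyPatchDataset()
inp, target = ds[0]
```

Because only `__getitem__` and `__len__` are required, such a dataset can be wrapped in a `torch.utils.data.DataLoader` or iterated by a custom training loop directly.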
Trainer now fully supports both 3D and 2D segmentation workflows. Data shapes no longer have to be specified; they are inferred automatically.
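As a sketch of how dimensionality can be inferred rather than specified, the number of spatial dimensions follows directly from the batch layout: N, C, H, W for 2D and N, C, D, H, W for 3D. The function below is illustrative, not the actual Trainer code.

```python
import numpy as np

def infer_spatial_dims(batch):
    """Illustrative helper: infer 2D vs. 3D from the batch shape alone.
    (N, C, H, W) -> 2 spatial dims; (N, C, D, H, W) -> 3 spatial dims."""
    if batch.ndim == 4:
        return 2
    if batch.ndim == 5:
        return 3
    raise ValueError(
        f"expected a 4D (N,C,H,W) or 5D (N,C,D,H,W) batch, got {batch.ndim}D"
    )

dims_2d = infer_spatial_dims(np.zeros((8, 1, 64, 64)))
```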
Tensorboard plotting in Trainer now works transparently for 3D and 2D images.
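One common way to make plotting transparent across dimensionalities is to reduce a 3D volume to a representative 2D slice before handing it to the plotting code. The helper below is a hypothetical sketch of that idea, not the actual Trainer implementation.

```python
import numpy as np

def to_plottable_2d(img):
    """Illustrative: reduce a 3D (D, H, W) volume to its middle depth
    slice so the same 2D plotting code handles both cases; a 2D (H, W)
    image passes through unchanged."""
    if img.ndim == 3:
        return img[img.shape[0] // 2]
    return img

slice_from_3d = to_plottable_2d(np.zeros((10, 32, 32)))
```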
A toy 2D segmentation dataset (SimpleNeuroData2d) that treats the depth dimension D of an HDF5 array as the index and delivers (H, W) slices. It doesn't yet support any augmentations and only contains 150 images, so don't expect impressive results from it. However, it can be used as a proof of concept for testing 2D segmentation workflows. SimpleNeuroData2d doesn't require any extra files beyond the standard neuro_data_cdhw dataset, which is expected at ~/neuro_data_cdhw by default, as in all other examples (the path is configurable, of course). Augmentations are planned to be supported soon via the standard torchvision.transforms interface.
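The core idea of slicing a volumetric array along its depth dimension can be sketched as follows. The class name and constructor here are illustrative, not SimpleNeuroData2d's actual code, and numpy arrays stand in for data read from an HDF5 file.

```python
import numpy as np

class DepthSlices2d:
    """Illustrative sketch: treat the depth dimension D of a (D, H, W)
    volume as the sample index and deliver 2D (H, W) slices."""

    def __init__(self, volume, labels):
        # In the real dataset, volume and labels would be read from HDF5.
        assert volume.shape == labels.shape
        self.volume = volume
        self.labels = labels

    def __getitem__(self, index):
        return self.volume[index], self.labels[index]

    def __len__(self):
        return self.volume.shape[0]  # one sample per depth slice


vol = np.zeros((150, 64, 64), dtype=np.float32)
lab = np.zeros((150, 64, 64), dtype=np.int64)
ds = DepthSlices2d(vol, lab)
img, lbl = ds[3]
```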
A simple demo training script that shows a 2D segmentation workflow (examples/simple2d.py). It uses the aforementioned toy dataset and a very simple convnet architecture that serves as a placeholder until serious 2D network architectures are available.
Partially addressing #5.