CDeep3M provides a plug-and-play, cloud-based deep learning solution for image segmentation of light, electron, and X-ray microscopy data.
Click the launch button to spin up the latest release of CDeep3M on the cloud (~20 minute spin-up time; Oregon region).
NOTE: Running will result in EC2 charges ($0.90-3.00 per hour of runtime).
Just opened your AWS account? Request access to GPU nodes before starting: follow the instructions here.
Follow the instructions on how to link your SSH key; you can also create the SSH key directly on AWS.
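If you prefer the command line, a key pair can also be created with the AWS CLI. This is a minimal sketch, assuming the AWS CLI is installed and configured; the key name is just an example:
# Create a new EC2 key pair and save the private key locally (example key name)
aws ec2 create-key-pair --key-name cdeep3m-key --query 'KeyMaterial' --output text > cdeep3m-key.pem
# Restrict file permissions so SSH will accept the key
chmod 400 cdeep3m-key.pem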
Once approved, launch the CloudFormation stack using the launch button. Click here for detailed instructions on launching CDeep3M. NOTE: Running the CloudFormation stack requires an AWS account and will result in EC2 charges ($0.90-3.00 per hour of runtime).
Click here for instructions on how to access your cloud stack.
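Access is typically via SSH with the key pair you registered. A minimal sketch is below; the address is a placeholder from your stack's output, and the ubuntu user name is an assumption, so check the linked instructions for the exact login details:
# SSH into the CDeep3M instance (replace the address with your stack's public IP or DNS name)
ssh -i cdeep3m-key.pem ubuntu@<public-ip-of-your-instance>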
Click here for instructions for CDeep3M demo run 1:
Running segmentation with a pretrained model (Runtime ~5min)
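On the running instance, this demo boils down to a single call to runprediction.sh. The sketch below uses placeholder paths, and the argument order (trained model directory, image directory, output directory) should be checked against the linked demo instructions:
# Segment a test image stack with a pre-trained model
runprediction.sh ~/demo/trained_model ~/demo/testset ~/demo/predictout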
Click here for instructions for CDeep3M demo run 2:
Running short training and segmentation using data already loaded on the cloud (Runtime ~1h)
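The short demo keeps the iteration count low so training finishes in about an hour. The sketch below uses placeholder directories for the data preloaded on the instance; the --numiterations flag and the script argument order should be verified against the demo page:
# Convert raw images and labels into an augmented training set
PreprocessTrainingData.m ~/demo/images ~/demo/labels ~/demo/training
# Train for a small number of iterations (full runs use far more)
runtraining.sh --numiterations 2000 ~/demo/training ~/demo/train_out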
How to train your own model and segment with CDeep3M
This will guide you step by step through training a network and running prediction on your own data.
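At a high level the workflow has three steps, sketched below with placeholder paths; see the linked guide for the exact commands and options:
# 1) Preprocess your images and matching label stacks into a training set
PreprocessTrainingData.m /data/my_images /data/my_labels /data/my_training
# 2) Train a model (the output directory holds the trained network)
runtraining.sh /data/my_training /data/my_model
# 3) Segment new image stacks with the trained model
runprediction.sh /data/my_model /data/new_images /data/segmentation_out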
Done with your segmentation? Don't forget to delete your CloudFormation stack.
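Stacks can be deleted from the CloudFormation console, or with the AWS CLI as sketched below; the stack name is whatever you chose at launch:
# Delete the CloudFormation stack to stop EC2 charges (replace the name with yours)
aws cloudformation delete-stack --stack-name <your-cdeep3m-stack-name>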
Hyperparameters can be adjusted by passing flags to runtraining.sh
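For example, a longer run might raise the iteration count; --numiterations is one such flag, but consult the script's usage output for the authoritative list of flags and their defaults:
# Train longer by increasing the iteration count (value and paths are illustrative)
runtraining.sh --numiterations 30000 ~/demo/training ~/demo/train_out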
If you use CDeep3M for your research please cite:
@article{haberl2018cdeep3m,
  title={CDeep3M - Plug-and-Play cloud based deep learning for image segmentation},
  author={Haberl, M. and Churas, C. and Tindall, L. and Boassa, D. and Phan, S. and Bushong, E.A. and Madany, M. and Akay, R. and Deerinck, T.J. and Peltier, S. and Ellisman, M.H.},
  journal={Nature Methods},
  year={2018},
  doi={10.1038/s41592-018-0106-z}
}
Further reading:
Please email cdeep3m@gmail.com with additional questions.
Thanks to CrispyCrafter and Jurgen for making a Docker version of CDeep3M. If you want to run CDeep3M locally, this should be the quickest way:
NOTE: Getting the following software and configuration set up is not trivial. To try out CDeep3M, we suggest using CDeep3M in the cloud, described above, which eliminates all of the following steps.
NVIDIA K40 GPU or better (needs 12 GB+ of GPU memory) with CUDA 7.5 or higher
A special forked version of Caffe, found here: https://github.com/coleslaw481/caffe_nd_sense_segmentation
Linux OS, preferably Ubuntu, with NVIDIA drivers installed and working correctly
Octave 4.0+ with the image package (e.g., under Ubuntu: sudo apt install octave octave-image octave-pkg-dev)
hdf5oct: https://github.com/stegro/hdf5oct/archive/b047e6e611e874b02740e7465f5d139e74f9765f.zip
bats (for testing): https://github.com/bats-core/bats-core/archive/v0.4.0.tar.gz
Python 2.7 with cv2 (OpenCV), joblib, and requests
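One way to install the Python dependencies is with pip; the package names below assume pip for Python 2.7 (system packages such as python-opencv are an alternative):
# Install the Python dependencies (OpenCV bindings, joblib, requests)
pip install opencv-python joblib requests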
# Download and unpack the CDeep3M release
wget https://github.com/CRBS/cdeep3m/archive/v1.6.3rc3.tar.gz
tar -zxf v1.6.3rc3.tar.gz
cd cdeep3m-1.6.3rc3
# Add the CDeep3M scripts to the PATH for this shell session
export PATH=$PATH:`pwd`
# Verify the installation by printing the version
runtraining.sh --version
For the contents of model/, see the model/LICENSE file for license details.
CDeep3M was developed based on a convolutional neural network implemented in DeepEM3D.
Support from NIH grants 5P41GM103412-29 (NCMIR), 5P41GM103426-24 (NBCR), and 5R01GM082949-10 (CIL).
Thanks to the DIVE lab for making DeepEM3D publicly available.
O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.
This research benefitted from the use of credits from the National Institutes of Health (NIH) Cloud Credits Model Pilot, a component of the NIH Big Data to Knowledge (BD2K) program.