ML-SIM: universal reconstruction of structured illumination microscopy images using transfer learning
Charles N. Christensen1,2,*, Edward N. Ward1, Meng Lu1, Pietro Lio2, Clemens F. Kaminski1
1 University of Cambridge, Department of Chemical Engineering and Biotechnology, Laser Analytics Group
2 University of Cambridge, Department of Computer Science and Technology, Artificial Intelligence Group
* Author of this repository:
Announcing a new app in the ML-SIM project: an interactive Streamlit application for exploring the image formation model that underlies the ML-SIM training data. It lets you explore the optical parameters relevant to SIM and gives insight into the achievable resolution improvement based on the simulated frequency support (OTF area). The app makes it easier to choose and fine-tune parameters before training a custom ML-SIM model.
Access the Streamlit application here:
To play with the Streamlit application locally, execute the following command:
streamlit run MLSIM_datagen/streamlit_frequency_support.py
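To give an idea of the kind of estimate the app provides, the sketch below approximates the widefield OTF cutoff and the extended SIM frequency support area from a numerical aperture, emission wavelength and illumination pattern spacing. The parameter values and the union-of-circles approximation are illustrative assumptions, not the exact implementation in MLSIM_datagen/streamlit_frequency_support.py.

# Rough estimate of SIM frequency support gain; parameter values and the
# union-of-circles approximation are illustrative assumptions.
import numpy as np

def sim_support_gain(na=1.2, wavelength=0.51, pattern_spacing=0.3,
                     n_angles=3, grid=1024):
    """Ratio of SIM frequency support area to widefield support area.

    na              -- numerical aperture of the objective
    wavelength      -- emission wavelength in microns
    pattern_spacing -- illumination stripe spacing in microns
    """
    k_cutoff = 2 * na / wavelength      # widefield OTF cutoff (cycles/micron)
    k_pattern = 1.0 / pattern_spacing   # illumination pattern frequency

    # Sample frequency space on a grid large enough to contain the SIM support
    k_max = k_cutoff + k_pattern
    kx, ky = np.meshgrid(np.linspace(-k_max, k_max, grid),
                         np.linspace(-k_max, k_max, grid))

    widefield = kx**2 + ky**2 <= k_cutoff**2
    sim = widefield.copy()
    for angle in np.arange(n_angles) * np.pi / n_angles:
        shift = k_pattern * np.array([np.cos(angle), np.sin(angle)])
        for s in (+1, -1):              # two shifted OTF copies per orientation
            sim |= (kx - s * shift[0])**2 + (ky - s * shift[1])**2 <= k_cutoff**2

    return sim.sum() / widefield.sum()

print(f"Approximate frequency support gain: {sim_support_gain():.2f}x")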
Below is a preview of the interactive Streamlit application:
See https://ML-SIM.github.io for examples and test images. A live demo is available at:
The model used in this demo assumes that inputs are 9-frame SIM stacks at 512x512 resolution, i.e. 3 orientations and 3 phase shifts. It will run on other dimensions, but reconstruction quality is likely to suffer.
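For reference, a minimal sketch of preparing such an input is given below, assuming the raw acquisition is a 9-frame TIFF stack loaded with tifffile; the file name and the angle-major frame ordering are assumptions and must match your acquisition.

# Hypothetical input preparation for the demo model; the file name and the
# angle-major ordering of the nine frames are assumptions.
import numpy as np
import tifffile
import torch

stack = tifffile.imread("raw_sim_stack.tif").astype(np.float32)  # expected shape (9, 512, 512)
assert stack.shape == (9, 512, 512), f"unexpected shape {stack.shape}"

# Min-max normalise to [0, 1], loosely matching the --norm minmax training option
stack = (stack - stack.min()) / (stack.max() - stack.min() + 1e-8)

# The network treats the nine frames as input channels: shape (batch, 9, H, W)
inputs = torch.from_numpy(stack).unsqueeze(0)
print(inputs.shape)  # torch.Size([1, 9, 512, 512])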
ML-SIM: universal reconstruction of structured illumination microscopy images using transfer learning https://doi.org/10.1364/BOE.414680
ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images https://arxiv.org/abs/2003.11064
A demonstration of the graphical user interface that has been developed for ML-SIM is shown below. It relies on an engine written in Python, using various image libraries to parse the many formats used in scientific imaging and PyTorch for deep learning. The ML-SIM reconstruction functionality of the app is ready for end-users as it stands. More features are planned, such as generating new SIM datasets and training new models on those datasets within the app. Over time this software will also be extended with other plugins that rely on deep learning for other image processing tasks. Read more.
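For users who prefer to script the reconstruction step outside the app, the sketch below shows one possible way to apply a pre-trained ML-SIM model to a single TIFF stack with PyTorch, assuming the model has been exported as a TorchScript archive; the file names here are placeholders, not the app's actual engine code.

# Hypothetical stand-alone reconstruction; the TorchScript export and the
# file names below are placeholders, not the app's actual engine code.
import numpy as np
import tifffile
import torch

model = torch.jit.load("mlsim_rcan.pt").eval()   # assumed TorchScript export of a trained model

raw = tifffile.imread("cell_sim_stack.tif").astype(np.float32)   # raw 9-frame SIM stack
raw = (raw - raw.min()) / (raw.max() - raw.min() + 1e-8)

with torch.no_grad():
    sr = model(torch.from_numpy(raw).unsqueeze(0))   # (1, 1, H, W) super-resolved output

tifffile.imwrite("cell_sim_reconstruction.tif", sr.squeeze().numpy())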
Code files, Jupyter notebooks and source code for a graphical desktop app have been added. Further documentation, example reconstruction outputs, pre-trained models and snippets to train and evaluate the models reported in the publications are to be added shortly.
ML-SIM uses synthetic training data simulated with a physically accurate implementation of the SIM (structured illumination microscopy) imaging process. A deep neural network is trained to solve the inverse problem, and with transfer learning the trained model can be made to work well on experimental data from the lab.
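To give a flavour of that simulation, the minimal sketch below generates the nine raw frames for one ground-truth image by modulating it with sinusoidal illumination patterns (3 orientations x 3 phases) and blurring with a Gaussian approximation of the PSF; the Gaussian PSF and the parameter values are simplifying assumptions rather than the repository's exact image formation code.

# Simplified SIM image formation for one training pair; the Gaussian PSF and
# the parameter values are simplifying assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_sim_stack(gt, k=0.2, modulation=0.8, psf_sigma=2.0,
                       n_angles=3, n_phases=3):
    """Return an (n_angles * n_phases, H, W) stack of raw SIM frames."""
    h, w = gt.shape
    yy, xx = np.mgrid[0:h, 0:w]
    frames = []
    for a in np.arange(n_angles) * np.pi / n_angles:
        for p in np.arange(n_phases) * 2 * np.pi / n_phases:
            # Sinusoidal illumination at angle a, spatial frequency k, phase p
            pattern = 1 + modulation * np.cos(
                2 * np.pi * k * (xx * np.cos(a) + yy * np.sin(a)) + p)
            # Illuminate the sample, then blur with the (approximate) PSF
            frames.append(gaussian_filter(gt * pattern, psf_sigma))
    return np.stack(frames)

gt = np.random.rand(512, 512)        # stand-in for a DIV2K / ImageNet source image
raw_stack = simulate_sim_stack(gt)   # shape (9, 512, 512)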
For more detailed package versions, please see the Pipfile in Graphical-App.
python MLSIM_pipeline.py --sourceimages_path SRC_IMAGES --out ~/model_out \
--root auto --task simin_gtout --imageSize 512 --nrep 1 --datagen_workers 4 \
--model rcan --nch_in 9 --nch_out 1 --ntrain 780 --ntest 20 --scale 1 \
--n_resgroups 2 --n_resblocks 5 --n_feats 48 --lr 0.0001 \
--nepoch 50 --scheduler 10,0.5 --norm minmax --dataset fouriersim --workers 0 \
--batchSize 5 --saveinterval 20 --plotinterval 10 --nplot 5 --Nangle 3 --Nshift 3
where SRC_IMAGES is a directory of diverse images from ImageNet, DIV2K (used in the publication) or similar image sets. To see all options, run python MLSIM_pipeline.py -h or see the source code.
See 3_Evaluate.ipynb for how to run this pre-trained model.

An easy-to-install and easy-to-use desktop app for Windows 10, macOS and Linux is available as an Electron app. Instructions and all source code to run the app are given in the sub-folder Graphical-App. The program allows one to batch process a set of directories, including subdirectories, that contain TIFF stacks, and to customise and select the model used for reconstruction. See the screenshot below.
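As a command-line counterpart to the batch feature just described, the sketch below walks a directory tree, picks out TIFF stacks and passes each to a reconstruction function; reconstruct() and the directory names are hypothetical stand-ins for whichever ML-SIM model call and folder layout you use.

# Hypothetical batch-processing loop mirroring the desktop app's behaviour;
# reconstruct() is a stand-in for an actual ML-SIM model call.
from pathlib import Path
import tifffile

def reconstruct(stack):
    raise NotImplementedError("plug in an ML-SIM model here")

input_root = Path("experiments")       # assumed tree of directories with TIFF stacks
output_root = Path("reconstructions")

for tif in input_root.rglob("*.tif"):  # recurse into subdirectories
    stack = tifffile.imread(tif)
    if stack.ndim != 3 or stack.shape[0] % 9 != 0:
        continue                       # skip files that are not 9-frame SIM stacks
    result = reconstruct(stack)
    out = output_root / tif.relative_to(input_root)
    out.parent.mkdir(parents=True, exist_ok=True)
    tifffile.imwrite(out, result)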