# lls_dd: Lattice Light Sheet Deskew and Deconvolution with GPU-accelerated Python

This project provides GPU-accelerated Python code for post-processing (deconvolution, deskew, rotate to coverslip, MIP projections) of image stacks acquired on a lattice light sheet microscope. If no supported GPU is present, all functions can also be run on a CPU.
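To illustrate what the deskew step does: each z-slice of a raw lattice light sheet stack is laterally offset from the previous one, because the sample is scanned at an angle to the detection objective. The sketch below is a minimal CPU illustration of that shear using `scipy`; the function name, parameter defaults, and implementation are illustrative assumptions, not the GPU code `lls_dd` actually uses.

```python
import numpy as np
from scipy import ndimage

def deskew(stack, angle_deg=31.8, dz_um=0.3, dx_um=0.104):
    """Shear a raw (z, y, x) light sheet stack into coverslip coordinates.

    Each z-slice is shifted along x by dz*cos(angle)/dx pixels relative
    to the previous one. All parameter defaults are illustrative only.
    """
    nz, ny, nx = stack.shape
    shift_per_slice = dz_um * np.cos(np.deg2rad(angle_deg)) / dx_um
    # widen the x axis so the sheared slices still fit
    nx_out = nx + int(np.ceil(shift_per_slice * (nz - 1)))
    out = np.zeros((nz, ny, nx_out), dtype=float)
    for z in range(nz):
        # place the slice on the wider canvas, then apply its subpixel shift
        padded = np.zeros((ny, nx_out))
        padded[:, :nx] = stack[z]
        out[z] = ndimage.shift(padded, (0, shift_per_slice * z), order=1)
    return out
```

After the shear, the volume can additionally be rotated so the coverslip lies parallel to the xy plane, which is what the rotate-to-coverslip step does.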
The repository encompasses:

- `lls_dd`: a command line tool for batch processing of experiments
- `lls_dd`: a Python package which provides the functions on which the command line tool is built

## `lls_dd` command line tool usage

The following assumes that you have `lls_dd` installed (see the Installation section below). The expected layout of an experiment folder is described in `folder_structure.md`.

`lls_dd`
expects a configuration file that contains some fixed settings. The default location for this configuration file is `$HOME/.lls_dd/fixed_settings.json`. When `lls_dd` does not find this file, it will try to create it. Make sure you edit the file to reflect the configuration of your microscope (mainly the light sheet angle and the magnification).

Run `lls_dd --help`
to get an overview of the command line arguments and the main processing commands:
```
λ lls_dd --help
Usage: lls_dd [OPTIONS] EXP_FOLDER COMMAND [ARGS]...

  lls_dd: lattice lightsheet deskew and deconvolution utility

Options:
  --home TEXT
  --debug / --no-debug
  -f, --fixed_settings TEXT  .json file with fixed settings
  --help                     Show this message and exit.

Commands:
  process  Processes an experiment folder or individual stacks therein.
  psfs     list psfs in experiment folder
  stacks   list stacks in experiment folder
```
`EXP_FOLDER` is the experiment folder that you wish to process. The `process` command takes additional `[ARGS]`. You can see the arguments for `process` by passing `--help` after the experiment folder and the process command:
```
λ lls_dd ..\..\..\Data\Experiment_testing_stacks\ process --help
Usage: lls_dd process [OPTIONS] [OUT_FOLDER]

  experiment folder to process (required)
  output folder (optional), otherwise same as input

Options:
  -M, --MIP                 calculate maximum intensity projections
  --rot                     rotate deskewed data to coverslip coordinates
                            and save
  --deskew                  save deskewed data
  -b, --backend TEXT        deconvolution backend, either "flowdec" or
                            "gputools"
  -i, --iterations INTEGER  if >0, perform this number of Richardson-Lucy
                            deconvolution iterations
  -r, --decon-rot           if deconvolution was chosen, rotate deconvolved
                            and deskewed data to coverslip coordinates and
                            save
  -s, --decon-deskew        if deconvolution was chosen, save deconvolved,
                            deskewed data
  -n, --number TEXT         stack number to process; if not provided, all
                            stacks are processed
  --mstyle [montage|multi]  MIP output style
  --skip_existing           if this option is given, files for which the
                            output already exists will not be processed
  --lzw INTEGER             lossless compression level for tiff (0-9).
                            0 is no compression
  --help                    Show this message and exit.
```
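For reference, the maximum intensity projections produced by `-M` are simple max-reductions of the stack along each axis, and the `montage` MIP style arranges the three orthogonal views on a single canvas. The following numpy sketch illustrates the idea; the exact layout is an assumption for illustration, not necessarily the montage `lls_dd` writes.

```python
import numpy as np

def mip_montage(stack):
    """Combine the three orthogonal maximum intensity projections of a
    (z, y, x) stack into one 2D montage (illustrative layout only)."""
    nz, ny, nx = stack.shape
    xy = stack.max(axis=0)      # top view,   shape (ny, nx)
    xz = stack.max(axis=1)      # front view, shape (nz, nx)
    yz = stack.max(axis=2).T    # side view,  transposed to (ny, nz)
    canvas = np.zeros((ny + nz, nx + nz), dtype=stack.dtype)
    canvas[:ny, :nx] = xy       # top-left: xy projection
    canvas[ny:, :nx] = xz       # below it: xz projection
    canvas[:ny, nx:] = yz       # right of it: yz projection
    return canvas
```

The `multi` style would instead save each projection as a separate image.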
The Jupyter notebook requires a sample image volume and a PSF that are too large for GitHub. Download them here (Dropbox links):

The following notebooks illustrate the basic algorithms used and provide examples for batch processing.
## Installation: `conda` + `pip`

Install Anaconda or Miniconda.

Create a conda environment `llsdd`:

```
conda env create -f environment-{...}.yml
```

where `environment-{...}.yml` stands for one of the two provided environment files: choose `environment-no-CUDA.yml` if you do not have a CUDA-compatible graphics card, or `environment-CUDA.yml` if you do have a CUDA-compatible GPU. The latter will install `tensorflow-gpu`, which is essential for fast deconvolution.

Activate the new environment with `conda activate llsdd`.
Download and unzip, or `git clone`, this repository and install it:

```
git clone https://github.com/VolkerH/Lattice_Lightsheet_Deskew_Deconv.git
cd Lattice_Lightsheet_Deskew_Deconv
python setup.py install
```

You should now be able to use the `lls_dd` command line utility.
If the installation fails for `pyopencl`, see the paragraph on `pyopencl` in the Troubleshooting section.
## Troubleshooting

### GPU memory

The post-processing of lattice light sheet stacks requires a considerable amount of GPU memory; the deconvolution in particular is very memory hungry (see the discussion in the issues here and here). If you run out of GPU memory, there are several troubleshooting steps you can take:

- Close other applications and processes that use GPU memory before running `lls_dd`.
- Use `--decon-deskew` only (the `--decon-rot` option requires more GPU memory for the affine transform).

### `pyopencl` installation and errors

For GPU-accelerated deskew/rotate you need to install OpenCL drivers for your GPU.
Getting `pyopencl` (one of the required Python dependencies) to work can be tricky. When installing from `conda-forge`, it seems to be somewhat of a lottery whether the installed package works. On some Windows machines where I did not manage to obtain a working `pyopencl` from `conda-forge`, I found that uninstalling the conda-installed `pyopencl` and then `pip`-installing a binary wheel from Christoph Gohlke into my conda environment did the trick.
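When diagnosing such problems, it can help to first check whether `pyopencl` sees any OpenCL platform and device at all. A small diagnostic sketch (not part of `lls_dd`) that degrades gracefully when `pyopencl` or the drivers are missing:

```python
def list_opencl_devices():
    """Return a list of (platform_name, device_name) tuples,
    or an empty list if pyopencl/OpenCL is not usable."""
    try:
        import pyopencl as cl
    except ImportError:
        return []  # pyopencl itself is not installed
    devices = []
    try:
        for platform in cl.get_platforms():
            for dev in platform.get_devices():
                devices.append((platform.name, dev.name))
    except cl.Error:
        pass  # no usable OpenCL runtime/driver
    return devices

if __name__ == "__main__":
    found = list_opencl_devices()
    if not found:
        print("No working OpenCL platform/device found")
    for plat, dev in found:
        print(f"{plat}: {dev}")
```

If this prints no devices even though a GPU is present, the problem is with the OpenCL driver installation rather than with `lls_dd`.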
This project was started in late 2018 with the intention to create an open-source solution, because the existing GPU-accelerated implementation at the time was only available in binary form after signing a research license agreement with Janelia. Meanwhile, the Janelia code has been open-sourced and put on GitHub.
Many of the features of `lls_dd` overlap with Talley Lambert's LLSpy, which also handles batch processing of experiments and has additional functions, such as channel registration and sCMOS sensor corrections, that are not (yet) present in `lls_dd`.

Currently, `lls_dd` mainly leverages two libraries that handle the heavy lifting:

- `flowdec` by Eric Czech: the default library used for deconvolution, based on TensorFlow.
- `gputools` by Martin Weigert: used for affine transformations and, optionally, also for OpenCL-based deconvolution (experimental).

Sample image used on this page courtesy of Felix Kraus and David Potter (Monash University).
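For reference, the Richardson-Lucy scheme behind the `-i`/`--iterations` option can be sketched in a few lines of numpy/scipy. This is a plain CPU illustration of the algorithm, not the code used by `flowdec` or `gputools`:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=10):
    """Plain Richardson-Lucy deconvolution (2D case, for illustration).

    For a 3D stack, flip the PSF along all three axes instead of two.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]  # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(iterations):
        # blur the current estimate and compare it to the measurement
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        # redistribute intensity according to the mirrored PSF
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration blurs the current estimate, compares it against the measured image, and corrects the estimate accordingly; GPU backends accelerate the repeated convolutions by running the FFTs on the device, which is why the iteration count directly drives both runtime and memory use.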
This library was written by Volker Hilsenstein at Monash Micro Imaging.

The code in this repository is distributed under a BSD-3 license, with the exception of parts from other projects, which have their respective licenses reproduced in the source files.