Closed: AEttinger closed this issue 6 years ago
Hello Andreas -
I cannot give you a definite answer regarding timelines (other than that people are working hard on it), but I will jump in on the training data, since you seem to be interested in the denoising application.
A mantra I like to go by: "A network is only as good as its training". How much training data you need depends on a bunch of factors but it boils down to:
How consistent are the structures (spacing, size, intensity)? Denoising nicely separated nuclei in a single layer of cultured cells is a different story than denoising nuclei in a developing embryo, for example. The former likely needs just a couple of hundred training pairs, while the latter goes into the (tens of) thousands of slices.
In general it is worth putting some effort (e.g. time) into training data acquisition. To get an idea of how many stacks you need to acquire for denoising signals from complex structures, see section 2.3 of the supplementary material. For planaria we imaged 6 biological samples in their entirety.
Critical for the denoising is the registered acquisition of noisy/ground-truth image pairs, e.g. image one optical slice in all conditions before moving on to the next optical section. It is also wise to acquire multiple noisy conditions, as it is sometimes hard to predict what the network can learn and at which noise levels hallucination starts (see Supplemental Figures 1 and 2).
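For concreteness, here is a minimal sketch of how such registered pairs could later be turned into training patches with the csbdeep Python API, assuming matched file names in hypothetical folders `data/low_snr/` and `data/GT/` (folder names, patch size, and patch count are illustrative, not from this thread):

```python
# Minimal sketch (illustrative): build training patches from registered
# low-SNR / ground-truth stacks using the csbdeep data functions.
# Assumes matched file names in data/low_snr/ and data/GT/ (hypothetical paths).
from csbdeep.data import RawData, create_patches

raw_data = RawData.from_folder(
    basepath='data',          # hypothetical root folder
    source_dirs=['low_snr'],  # one or more noisy acquisition conditions
    target_dir='GT',          # registered high-SNR / ground-truth stacks
    axes='ZYX',
)

# Extract random 3D patches from the registered pairs; numbers are examples only.
X, Y, XY_axes = create_patches(
    raw_data=raw_data,
    patch_size=(16, 64, 64),
    n_patches_per_image=512,
    save_file='data/my_training_data.npz',
)
```

If you acquire several noise levels, each condition could simply be listed as an additional entry in `source_dirs`, so all of them are paired against the same registered ground truth.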
I hope that helps for now.
Cheers,
Tobias
Thanks for the reply! I'll read the supplement more carefully ;)
De-noising would indeed be a priority for me, to reduce our exposure times/laser powers and have happier cells.
Your comment is really valuable, as I will have to put some thought into how to get a good training set.
Best, Andreas
Hi,
thanks for providing this software already; the results in the pre-print look really impressive. I can't wait to try it out on our own live-imaging data. Do you have a timeline for implementing/releasing training of one's own models in Fiji or as Python code? What kind of training data will be needed? I assume images/z-stacks taken with short/long exposures, but how big a data set would be needed?
Thanks, Andreas