Right now, setting `subtract_background` in the config file lets a user designate a timepoint as background (e.g., a blank imaging timepoint) whose pixel values are subtracted to improve the values used for fits. However, subtracting a full image can increase pixel value variance. To avoid this, we could instead take a measure of center (mean or median) of the pixel values in the background images and subtract that scalar. This would also greatly simplify what we store: we'd still need to extract the background pixel volumes to compute the measure of center, but we could then discard the NPZ stack and store only the per-ROI scalar (mean or median) for background. A rough sketch of the idea is below.
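A minimal sketch of the proposed change, assuming NumPy; the function names, signatures, and the `bg_volumes` mapping are illustrative, not the project's actual API:

```python
import numpy as np

def background_center_per_roi(
    bg_volumes: dict[str, np.ndarray], stat: str = "median"
) -> dict[str, float]:
    """Reduce each ROI's extracted background pixel volume to one scalar.

    bg_volumes maps ROI IDs to background pixel arrays (hypothetical
    structure). Only these scalars would need to be stored, not the
    full NPZ stack.
    """
    reduce = np.nanmedian if stat == "median" else np.nanmean
    return {roi: float(reduce(vol)) for roi, vol in bg_volumes.items()}

def subtract_scalar_background(img: np.ndarray, bg_value: float) -> np.ndarray:
    # Subtract the per-ROI scalar and clip at zero so pixel values
    # stay nonnegative; this avoids the per-pixel variance that
    # subtracting a whole background image introduces.
    return np.clip(img.astype(np.float64) - bg_value, 0.0, None)
```

One consideration for the choice of statistic: the median is more robust to hot pixels and other outliers in the background images than the mean.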
@TLSteinacker
Follow-up on #324, #323, #322.