EtalumaSupport / LumaViewPro


RGB and bit depth file format #352

EtalumaSupport commented 7 months ago

Two major issues arise when LumaViewPro creates images: first, it always generates three-channel images regardless of how many channels/lasers were actually used; second, all images are forcibly converted to a pseudo-16-bit format.

Calling for the generation of an arbitrary array “img” with three channels (np.zeros((y_dim, x_dim, 3))) will always cause downstream issues, even when the if/elif statements that follow attempt to fill the appropriate channels with signal. The most widely used piece of software for biomedical image analysis is probably FIJI, and FIJI will interpret the output image as an RGB image (even though our camera is monochromatic). Further, if we use a single laser, we still get a three-channel RGB image because np.zeros fills “img” with zeros in every channel, leaving the unused channels black.
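To make the problem concrete, here is a minimal sketch of the pattern (the dimensions and channel assignment are made up for illustration; this is not the actual LumaViewPro code):

```python
import numpy as np

# Simplified illustration of the current pattern: a 3-channel buffer is
# allocated up front, regardless of how many lasers were actually used.
x_dim, y_dim = 1952, 1952  # hypothetical sensor dimensions
img = np.zeros((y_dim, x_dim, 3), dtype=np.uint16)

# With a single laser, only one channel receives signal; the other two
# stay zero-filled, so FIJI opens the result as an RGB image with two
# black channels instead of a single monochrome image.
single_laser_frame = np.random.randint(0, 4096, (y_dim, x_dim), dtype=np.uint16)
img[:, :, 1] = single_laser_frame  # e.g. the "green" slot
```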

There is also the matter of conventions. OpenCV typically stores color in “BGR” (blue, green, red) order, while image interpreters (FIJI, Imaris, etc.) typically operate in “RGB” (red, green, blue) order. This can lead to confusion down the line, since the blue and red channels will be swapped.
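If a three-channel image must be produced at all, one mitigation is to reorder the channels before saving. A minimal sketch (the array here is a made-up stand-in for the output buffer):

```python
import cv2
import numpy as np

# Hypothetical three-channel frame in OpenCV's native BGR ordering.
img_bgr = np.zeros((1952, 1952, 3), dtype=np.uint16)

# Reorder to RGB before handing the image to RGB-convention tools
# (FIJI, Imaris, etc.) so red and blue are not swapped.
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
```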

My suggestion is to avoid the current structure for creating a three-channel image and instead generate individual images (one per laser) that are then appended together in the OME-TIFF format as an image stack. By doing so, you avoid erroneous interpretation as an RGB image and each laser is properly seen as a monochromatic signal.
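A minimal sketch of that approach, assuming the tifffile package for OME-TIFF writing (the wavelengths and frame contents below are made up for illustration; real frames would come from the acquisition code):

```python
import numpy as np
import tifffile

# Hypothetical per-laser captures from the monochrome camera.
wavelengths = ["405", "488", "561"]
frames = [np.random.randint(0, 4096, (1952, 1952), dtype=np.uint16)
          for _ in wavelengths]

# Stack the individual monochrome images along a channel axis and write
# them as one OME-TIFF, naming each channel after its laser so viewers
# like FIJI show a labeled stack rather than an RGB composite.
stack = np.stack(frames, axis=0)  # shape: (C, Y, X)
tifffile.imwrite(
    "capture.ome.tif",
    stack,
    metadata={
        "axes": "CYX",
        "Channel": {"Name": [f"{wl} nm" for wl in wavelengths]},
    },
)
```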

It’s important to note that a sketch like the one above is a gross simplification and misses a lot of nuance present in the LumaViewPro software, but the general idea should work. The name of each channel can have the laser wavelength appended to it so folks can quickly ID which image in a stack represents which laser.

One other issue that pops up is 16-bit images: as far as I am aware, numpy (np) does not support a 12-bit format and will instead store any input image (in this case, the variable “array”) as unsigned 16-bit integers, unless another width such as 8-bit or 64-bit is specified. This appears to be a general image-processing-software phenomenon, as most software I’ve used doesn’t bother to explicitly display a 12-bit image with the appropriate bins (0–4095). But if you check what range of values your “16-bit image from a 12-bit camera” actually contains, it only runs from 0–4095 instead of 0–65535.
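A quick way to confirm this on any saved array (the frame here is randomly generated to stand in for a real capture):

```python
import numpy as np

# A "16-bit" frame that actually came from a 12-bit sensor: the dtype
# says uint16, but the values never exceed the 12-bit ceiling.
frame = np.random.randint(0, 4096, (1952, 1952), dtype=np.uint16)

print(frame.dtype)          # uint16
print(frame.max() <= 4095)  # True: only the 12-bit range 0-4095 is used
```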

I noticed that you have a quick function that takes an input image and scales it to 16-bit, assuming a user didn’t already make it 8-bit.

All this does is scale a 12-bit image into the 16-bit space, which would normally be an okay thing to do if we were working with analysis software that expects 16-bit images. But since numpy already forces the data into the 16-bit format, I think it may be best to remove the scaling and simply report the native 12-bit values. Unfortunately, this means the images will still be seen as “16-bit” by most software, but users would be able to see on review that a 12-bit binning range is used.
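I don’t have the exact function quoted here, but the scaling presumably amounts to something like a 4-bit left shift; the proposal is to drop it and pass the native values through. Both functions below are my own sketches, not LumaViewPro code:

```python
import numpy as np

def scale_12_to_16(array: np.ndarray) -> np.ndarray:
    """Roughly what the current scaling does: stretch 0-4095 toward
    the 16-bit ceiling (4095 << 4 == 65520). Approximation only."""
    return array.astype(np.uint16) << 4

def passthrough(array: np.ndarray) -> np.ndarray:
    """Proposed behavior: keep the native 12-bit values in a uint16
    container, so the recorded range stays 0-4095."""
    return array.astype(np.uint16)
```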

(issue description taken from email correspondence with Danial Ahmad, U. of Rochester)