LJMUAstroecology / flirpy

Python library to interact with FLIR camera cores

Documentation? [How to colormap frames] #85

Open nba-ibme opened 1 year ago

nba-ibme commented 1 year ago

Any chance we could get some documentation? Especially for the Seq class, since that's what I'm trying to work with.

jveitchmichaelis commented 1 year ago

Hi,

Sorry, I have very little time to work on this (apologies that it's not documented), but I'm happy to answer questions if you have something specific you're struggling with. I think the code should be fairly straightforward.

The code attempts to locate "magic" bytes in the input SEQ file to find FFF frame boundaries. Then it tries to decode the header for each FFF block, which contains calibration data and the image, stored as a TIFF (for example). The most recent code attempts to do this lazily so you don't need to export the entire file to work with a specific frame.

This has proved more difficult than I expected because there is a lot of variation within SEQ file headers. ExifTool seems to be able to handle this, so earlier versions of flirpy just used that.
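Roughly, the frame-boundary search amounts to scanning the raw bytes of the SEQ file for that marker. A minimal sketch (the marker value here is an assumption; the real pattern and the header parsing live in flirpy/io/seq.py):

import re

def find_frame_offsets(path, magic=b"FFF\x00"):  # assumed FFF header marker
    """Return byte offsets of candidate FFF frame starts in a SEQ file."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.start() for m in re.finditer(re.escape(magic), data)]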

The splitter example covers typical usage - you give it a list of files and an output folder: https://github.com/LJMUAstroecology/flirpy/blob/master/scripts/split_seqs

but the Seq class itself is meant to be used as an iterator - see the sketch below.
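In iterator form, usage looks roughly like this. The constructor argument and the frame accessor name are assumptions from memory, so check flirpy/io/seq.py and flirpy/io/fff.py for the exact API:

from flirpy.io.seq import Seq

# Sketch only: verify the constructor and accessor against flirpy/io/seq.py
seq = Seq("flight.seq")         # hypothetical input file
for frame in seq:               # each frame corresponds to one FFF record
    image = frame.get_image()   # assumed accessor for the raw 2D image
    # ... colormap or save `image` here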

Thanks

nba-ibme commented 1 year ago

Thank you for the quick response!

Specifically, I've been trying to use flirpy's seq_split utility to get the frames out of a .seq file, but the problem is that the images come out in grayscale rather than the full-color image you'd see if you watched the .seq video in something like FLIR Thermal Studio. I was hoping to use the Seq class defined in seq.py under the io/ folder to get those full-color images, but that's where I ran into the lack of documentation. Do you have any idea how to solve the full-color problem? Again, I want the full-color images you'd see while watching the original .seq video, as if you had taken screenshots of the video playing.

jveitchmichaelis commented 1 year ago

Oh ok, you just need to colormap the image. This isn't something specific to Flirpy:

https://learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/

Flirpy will give you a radiometric image, but each pixel has a single intensity which you need to map to a colour. So something like:

import cv2
import numpy as np

def make_colour_image(image):
    image = image.astype(float)
    # Contrast-stretch to the full 0-255 range ("auto gain")
    value_range = image.max() - image.min()
    image_scaled = (((image - image.min()) / value_range) * 255).astype(np.uint8)
    # Map each 8-bit intensity to a 3-channel colour
    return cv2.applyColorMap(image_scaled, cv2.COLORMAP_INFERNO)

This will create an image similar to "auto gain". We subtract the minimum value to get maximum contrast and convert the range of values to 8-bit (0-255). Then applyColorMap gives you a 3-channel image which you can save as normal.
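For example, if `frame` is the 2D radiometric array you get out of flirpy, saving a coloured PNG is just:

coloured = make_colour_image(frame)
cv2.imwrite("frame_0001.png", coloured)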

The reason that flirpy doesn't do this automatically is that users want different visualisation options, so we just dump the "raw" data. However, it would probably be a good feature to add, as people expect thermal images to be coloured.

nba-ibme commented 1 year ago

Thank you!

image here would correspond to im.final, where im is an ImagerFile object, correct?

jveitchmichaelis commented 1 year ago

The input is a 2D numpy array (HxW) and the output is a 3D numpy array (HxWxC) where C=3 - but it sounds like you're using Perl?
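For instance, a quick shape check with a stand-in frame, just to illustrate the dimensions (using the make_colour_image function above):

import numpy as np

fake_frame = np.random.randint(0, 2**16, (512, 640), dtype=np.uint16)  # stand-in 2D frame
print(fake_frame.shape)                       # (512, 640)
print(make_colour_image(fake_frame).shape)    # (512, 640, 3)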

jveitchmichaelis commented 1 year ago

Proposed enhancement: