micasense / imageprocessing

MicaSense RedEdge and Altum image processing tutorials
https://www.micasense.com
MIT License

Panel.panel_corners() returns different coordinate order than written in XMP:ReflectArea tag #111

Closed and-viceversa closed 4 years ago

and-viceversa commented 4 years ago

Say I take a capture of the reflectance panel with an Altum.

I can inspect auto panel detection like

img1 = micasense.image.Image(path1)
print(img1.panel_region)
img2 = micasense.image.Image(path2)
print(img2.panel_region)
img3 = micasense.image.Image(path3)
print(img3.panel_region)
img4 = micasense.image.Image(path4)
print(img4.panel_region)
img5 = micasense.image.Image(path5)
print(img5.panel_region)

... which returns ...

[(949, 799), (849, 795), (845, 897), (945, 901)]
None
[(1077, 736), (977, 732), (973, 834), (1073, 838)]
[(961, 699), (859, 695), (855, 797), (957, 801)]
[(1013, 763), (911, 759), (909, 861), (1011, 865)]

It looks like the panel wasn't auto-detected in my band 2 image. This isn't a problem, because I can use Panel.panel_corners() like

p1 = Panel.Panel(img1)
print(p1.panel_corners())
p2 = Panel.Panel(img2)
print(p2.panel_corners())
p3 = Panel.Panel(img3)
print(p3.panel_corners())
p4 = Panel.Panel(img4)
print(p4.panel_corners())
p5 = Panel.Panel(img5)
print(p5.panel_corners())

... which returns ...

[[949 799]
 [849 795]
 [845 897]
 [945 901]]
[[1093  788]        <----- Different coordinate order.
 [1087  932]
 [ 923  931]
 [ 931  787]]
[[1077  736]
 [ 977  732]
 [ 973  834]
 [1073  838]]
[[961 699]
 [859 695]
 [855 797]
 [957 801]]
[[1013  763]
 [ 911  759]
 [ 909  861]
 [1011  865]]

The auto-detect returns coordinates in the order [ (UpperRight), (UpperLeft), (LowerLeft), (LowerRight) ].

Notice that the panel detection coordinates for image 2 (not auto-detected) are returned in a different order: [ [UpperRight], [LowerRight], [LowerLeft], [UpperLeft] ].

I'm still working through the implications of this, but could this be a problem downstream?

It could be a problem if this is happening silently and returning out-of-order geometries.

It could also be a problem if this is happening silently and the panel capture itself is not good, because the camera's panel auto-detect isn't happening.

Edit: Got the auto-detect coordinate order wrong.

poynting commented 4 years ago

Without digging much, I'm not sure if it's a problem downstream. I know there are some cases where we swap the order because NumPy and OpenCV use different conventions.

I usually use the Capture class, which includes some helper functions like

Capture.detect_panels() and Capture.panels_in_all_expected_images(),

etc. See https://github.com/micasense/imageprocessing/blob/master/micasense/capture.py

And possibly try to run your images through the Captures notebook to see what you get: https://micasense.github.io/imageprocessing/Captures.html
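A minimal sketch of that flow, for reference (assuming a from_filelist-style constructor; check capture.py for the exact name):

import glob
import os

import micasense.capture as capture

# Load one panel capture (all bands of a single trigger) from disk.
panel_images = glob.glob(os.path.join('/path/to/panel/capture', 'IMG_0000_*.tif'))
panel_cap = capture.Capture.from_filelist(panel_images)

# Run the library's detection across all bands and check the result.
panel_cap.detect_panels()
if panel_cap.panels_in_all_expected_images():
    print(panel_cap.panel_irradiance())
else:
    print('Panel not found in every band of this capture.')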

and-viceversa commented 4 years ago

Here's a quick method, Panel.panel_coords_in_order(self), that returns coordinates in the same order as Image.panel_region.

There is probably a better way to do this.

    def panel_coords_in_order(self):
        """
        The Panel.panel_corners() call can return corners in an inconsistent order. Helper method to sort them
        into a predictable order.
        :return: [ (ur), (ul), (ll), (lr) ] to mirror the Image.panel_region attribute order
        """
        pc = self.panel_corners()
        # split the corners into the two "left" and two "right" points of the panel box
        pc = sorted(pc, key=lambda p: p[0])
        left_coords = pc[:2]
        right_coords = pc[2:]

        # sort each side by y ascending (image y grows downward, so the upper corner comes first)
        left_coords = sorted(left_coords, key=lambda p: p[1])
        right_coords = sorted(right_coords, key=lambda p: p[1])

        # upper-right, upper-left, lower-left, lower-right
        return [tuple(right_coords[0]), tuple(left_coords[0]), tuple(left_coords[1]), tuple(right_coords[1])]

... which returns ...

[(949, 799), (849, 795), (845, 897), (945, 901)]
[(1093, 788), (931, 787), (923, 931), (1087, 932)]      <---- second Image panel coords in correct order
[(1077, 736), (977, 732), (973, 834), (1073, 838)]
[(961, 699), (859, 695), (855, 797), (957, 801)]
[(1013, 763), (911, 759), (909, 861), (1011, 865)]
and-viceversa commented 4 years ago

Thank you for the quick response. Agreed on testing Capture. I'm slowly building out a CLI for bulk processing and checking steps along the way.

poynting commented 4 years ago

I'm a bit confused. If the corners weren't detected, where are the corners coming from?

Also, is there a case you're running into where the order of the panel corners matters? The order of the QR code corners can matter in some cases, but as long as the panel corners form a valid rectangle, does the order matter?

and-viceversa commented 4 years ago

The bulk processing notebook doesn't include the "diagnostic" plots in its output, so I'm trying to integrate the "Smoothed panel region in reflectance image" plot from Tutorial 1.

The Tutorial 1 notebook appears to hardcode the panel region, and I wanted to automate that using the Image.panel_region attribute. In cases where panel_region is None I used Panel.panel_corners() to detect the box. Since the order is unpredictable, I can't assign the coordinates the way the tutorial does in

panelRegionRefl = reflectanceImage[uly:lry, ulx:lrx]
panelRegionReflBlur = cv2.GaussianBlur(panelRegionRefl,(55,55),5)
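(For reference, one order-independent way to get those slice bounds is to take the min/max bounding box of whatever corners come back; a sketch, with panel_bounding_box being a hypothetical helper, not part of the library:)

import numpy as np

def panel_bounding_box(corners):
    """Return (ulx, uly, lrx, lry) for corners given in any order.

    The tutorial's slice only needs an axis-aligned bounding box, so
    taking min/max over x and y sidesteps the ordering question.
    """
    pts = np.asarray(corners)
    ulx, uly = pts.min(axis=0)
    lrx, lry = pts.max(axis=0)
    return ulx, uly, lrx, lry

# ulx, uly, lrx, lry = panel_bounding_box(p2.panel_corners())
# panelRegionRefl = reflectanceImage[uly:lry, ulx:lrx]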

The Capture class already has methods for the various plots, except for the Gaussian-blurred reflectance of the panel area.

poynting commented 4 years ago

OK, sorry, I just re-read this from the top, and I think I understand now.

The issue is that you are seeing a different order between corners detected by the python library, and those auto-detected by the camera. That may be true, so it would probably be best to handle that in the metadata class and normalize things in one place. That may break some tests, which would then need updating.

The other approach would be to ignore it at the library level, since it's sort of an OpenCV region thing, and have an imageutils function that translates coordinate tuples into an OpenCV-friendly order.
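As a sketch of that second option (a hypothetical helper, not in imageutils today; it assumes the detected quadrilateral is close enough to axis-aligned that each corner falls in its own quadrant around the centroid):

def order_corners(corners):
    """Return corners as [upper-right, upper-left, lower-left, lower-right].

    Image coordinates: x grows to the right, y grows downward. Each corner is
    classified by which quadrant it occupies relative to the centroid.
    """
    pts = [tuple(p) for p in corners]
    cx = sum(p[0] for p in pts) / 4.0
    cy = sum(p[1] for p in pts) / 4.0
    ur = next(p for p in pts if p[0] > cx and p[1] < cy)
    ul = next(p for p in pts if p[0] < cx and p[1] < cy)
    ll = next(p for p in pts if p[0] < cx and p[1] > cy)
    lr = next(p for p in pts if p[0] > cx and p[1] > cy)
    return [ur, ul, ll, lr]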

However, and I can't stress this enough, when you go through the Capture class the library already handles a lot of what (I think) you are trying to do. Not this specific ordering issue, but, for example, if the camera has already auto-detected the corners you don't need the library to do it again; that only slows your processing down. Capture objects lazy-load and lazy-compute those kinds of things, detecting only when they absolutely need to.

and-viceversa commented 4 years ago

The issue is that you are seeing a different order between corners detected by the python library, and those auto-detected by the camera.

Thanks @poynting! Perfect summation. Here's a write-up of the issue and the workaround for anybody else who finds it useful.

I want to build a CLI that is similar to the Batch Processing notebook. However, the other tutorials have some great diagnostic outputs that are absent from batch processing. It's trivial to add most of them, something like the following snippet:

input_top_dir = '/path/to/micasense/images/0001SET'
output_dir = 'data/output'

panel_images = glob.glob(os.path.join(input_top_dir, '000', 'IMG_0000_*.tif'))

panel_cap = Capture.from_file_list(panel_images)

# call all Capture.plot methods. show and file_path are my own **kwargs to output a png.
panel_cap.plot_raw(show=False, file_path=os.path.join(output_dir, '1_raw'))
panel_cap.plot_panels(show=False, file_path=os.path.join(output_dir, '2_panels'))
panel_cap.plot_radiance(show=False, file_path=os.path.join(output_dir, '3_radiance'))
panel_cap.plot_vignette(show=False, file_path=os.path.join(output_dir, '4_vignette'))
panel_cap.plot_undistorted_radiance(show=False, file_path=os.path.join(output_dir, '5_undistorted_radiance'))

irradiance_list = panel_cap.panel_irradiance()
panel_cap.plot_undistorted_reflectance(irradiance_list=irradiance_list, show=False,
                                       file_path=os.path.join(output_dir, '6_undistorted_reflectance'))

However, Tutorial 1 has an output plot that the Capture class doesn't offer. Code block 6 shows how to output just the blurred reflectance panel area, which is a useful diagnostic for both the panel capture and the panel itself. The tutorial uses hardcoded values to grab the panel coordinate space, but for the CLI it's better to automate that. The first problem occurs in cases where the camera doesn't auto-detect the panel. Look at this output of Capture.plot_panels():

[Image: 2_panels, output of Capture.plot_panels()]

In the green band, note how there is a red box around the QR code and a blue box around the reflectance panel. This means the panel was not auto-detected by the camera, so Capture.images[1].panel_region is None. No problem, because you can use Panel.panel_corners() to detect the panel box in Python.
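In code, that fallback amounts to something like this sketch (using the same classes as above; panel_corners_for is a hypothetical helper name):

from micasense.panel import Panel

def panel_corners_for(image):
    """Prefer the camera's auto-detected region; fall back to Python detection.

    image is a micasense.image.Image; returns four (x, y) corner tuples.
    """
    if image.panel_region is not None:
        return image.panel_region
    return Panel(image).panel_corners()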

The second problem is that Panel.panel_corners() does not return the corner coordinates in the same order as Image.panel_region. We need to sort them into a predictable order so that code block 6 from the tutorial can be automated.

I added the following method to the Panel class. Same code as in the comments above.

    def panel_coords_in_order(self):
        """
        The Panel.panel_corners() call can return corners in an inconsistent order. Helper method to sort them
        into a predictable order.
        :return: [ (ur), (ul), (ll), (lr) ] to mirror the Image.panel_region attribute order
        """
        pc = self.panel_corners()
        # split the corners into the two "left" and two "right" points of the panel box
        pc = sorted(pc, key=lambda p: p[0])
        left_coords = pc[:2]
        right_coords = pc[2:]

        # sort each side by y ascending (image y grows downward, so the upper corner comes first)
        left_coords = sorted(left_coords, key=lambda p: p[1])
        right_coords = sorted(right_coords, key=lambda p: p[1])

        # upper-right, upper-left, lower-left, lower-right
        return [tuple(right_coords[0]), tuple(left_coords[0]), tuple(left_coords[1]), tuple(right_coords[1])]

Then run something like:

# core code copied from tutorial 1 block 6. plotutils.plot_with_color_bar() has added functionality.
for i, panel in enumerate(panel_cap.panels):
    ur, ul, ll, lr = panel.panel_coords_in_order()

    reflection_image = panel_cap.images[i].reflectance(irradiance=irradiance_list[i])

    panel_region_reflectance = reflection_image[ul[1]:lr[1], ul[0]:lr[0]]
    panel_region_reflectance_blur = cv2.GaussianBlur(panel_region_reflectance, (55, 55), 5)

    plotutils.plot_with_color_bar(img=panel_region_reflectance_blur,
                                  title=f'Band {i} Image Panel Region Reflectance',
                                  plot_text='Min Reflectance in panel region: {:1.2f}\n'
                                                'Max Reflectance in panel region: {:1.2f}\n'
                                                'Mean Reflectance in panel region: {:1.2f}\n'
                                                'Standard deviation in region: {:1.4f}\n'
                                                'Ideal is <3% absolute reflectance'.format(
                                      panel_region_reflectance.min(),
                                      panel_region_reflectance.max(),
                                      panel_region_reflectance.mean(),
                                      panel_region_reflectance.std()
                                  ),
                                  show=False,
                                  file_path=os.path.join(output_dir, f'refl_panel_region_blur_band_{i}'))

Here is one of the output plots: [Image: refl_panel_region_blur_band_1]

Closing this issue. Thanks again for the insight.

poynting commented 4 years ago

@and-viceversa thanks for the full summary. It seems like this would be great as a pull request if you would like to create one.

and-viceversa commented 4 years ago

I'm still exploring the downstream effects of this ... but I think a PR is incoming.

I like the idea of forcing camera auto-detection OR Python detection for the panel region. Not a mix of both, as is currently possible.
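Roughly what I have in mind, as a sketch (camera_or_python_panel_regions is a hypothetical helper, not in the library):

from micasense.panel import Panel

def camera_or_python_panel_regions(cap):
    """All-or-nothing: use the camera's regions only if every band has one,
    otherwise detect all of them in Python so the source is consistent.

    cap is a micasense.capture.Capture; returns one corner list per band.
    """
    camera_regions = [img.panel_region for img in cap.images]
    if all(region is not None for region in camera_regions):
        return camera_regions
    return [Panel(img).panel_corners() for img in cap.images]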

poynting commented 4 years ago

Differences in the region will change the radiance to reflectance factor slightly, but since the region should be pretty flat in aggregate, it shouldn't be too much.

The most performant option is to take the camera detection if it's there, and only compute it if it's not. I think in the more recent camera firmware the panel detection is all or nothing: if it doesn't find the panel in all bands, it won't save that capture. There was a firmware version where that wasn't the case, however, so it's good to handle both.

As far as the aligned stacks it shouldn't matter, getting the panel region is a band-by-band effect.

and-viceversa commented 4 years ago

@poynting I checked out the downstream effects of this issue, i.e. that panel coordinates detected by the Altum are ordered differently from panel coordinates detected in Python.

My tests show that coordinate order does not matter when the corners are passed to Panel.region_stats(). I used identical coordinates in both the camera-detected and Python-detected order and got identical results. Therefore, the panel irradiance mean will be calculated correctly in cases where not all panels are auto-detected by the camera.
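That matches what you'd expect: both orders trace the same convex quadrilateral, just with opposite winding, so any mask built from them covers the same pixels. A quick standalone check with the band 2 corners from above (not using the library itself):

import cv2
import numpy as np

# Camera-style order (UR, UL, LL, LR) and the order from Panel.panel_corners()
# (UR, LR, LL, UL) -- one is simply the reverse traversal of the other.
camera_order = np.array([(1093, 788), (931, 787), (923, 931), (1087, 932)], dtype=np.int32)
python_order = np.array([(1093, 788), (1087, 932), (923, 931), (931, 787)], dtype=np.int32)

mask_a = np.zeros((1200, 1600), dtype=np.uint8)
mask_b = np.zeros((1200, 1600), dtype=np.uint8)
cv2.fillConvexPoly(mask_a, camera_order, 255)
cv2.fillConvexPoly(mask_b, python_order, 255)

print(np.array_equal(mask_a, mask_b))  # expect True: same region, same stats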

Differences in the region will change the radiance to reflectance factor slightly, but since the region should be pretty flat in aggregate, it shouldn't be too much.

Understood. This was my other concern, that different region sizes may affect reflectance factor.

Thanks again for your attention on this and for the awesome processing library.