worldcoin / open-iris

Open Iris Recognition Inference System (IRIS)
MIT License
246 stars 50 forks

Pipeline fails when processing an image acquired in the visible spectrum #48

Closed cleosia closed 3 weeks ago

cleosia commented 2 months ago

Pipeline fails when processing an image acquired in the visible spectrum

Context

I'm using the IRIS pipeline to outline the pupil center and iris from images acquired with a "standard" camera (Canon EOS R). I convert the RGB images to gray-scale using img_pixels = cv2.cvtColor(img_pixels, cv2.COLOR_RGB2GRAY), then run the pipeline as usual:

full_env = iris.Environment(
    pipeline_output_builder=build_simple_debugging_output,
    error_manager=store_error_manager,
    disabled_qa=[
        iris.nodes.validators.object_validators.Pupil2IrisPropertyValidator,
        iris.nodes.validators.object_validators.OffgazeValidator,
        iris.nodes.validators.object_validators.OcclusionValidator,
        iris.nodes.validators.object_validators.IsPupilInsideIrisValidator,
        iris.nodes.validators.object_validators.IsMaskTooSmallValidator,
        iris.nodes.validators.cross_object_validators.EyeCentersInsideImageValidator,
        iris.nodes.validators.cross_object_validators.ExtrapolatedPolygonsInsideImageValidator,
    ],
    call_trace_initialiser=iris.PipelineCallTraceStorage.initialise,
)

iris_pipeline = iris.IRISPipeline(env=full_env)  # note: using the default env here does not solve the problem
output = iris_pipeline(img_data=img_pixels, eye_side=eye_side)  # changing eye side does not solve the problem

This works very well in most cases, but in some of them (about 14%) it fails with the following error message:

{'error_type': 'VectorizationError',
 'message': '_find_class_contours: Number of contours must be equal to 1.',
 'traceback': '  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\pipelines\\iris_pipeline.py", line 131, in run
    _ = self.nodes[node.name](**input_kwargs)
  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\io\\class_configs.py", line 67, in __call__
    return self.execute(*args, **kwargs)
  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\io\\class_configs.py", line 78, in execute
    result = self.run(*args, **kwargs)
  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\nodes\\vectorization\\contouring.py", line 75, in run
    geometry_contours = self._find_contours(geometry_mask)
  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\nodes\\vectorization\\contouring.py", line 90, in _find_contours
    pupil_array = self._find_class_contours(mask.pupil_mask.astype(np.uint8))
  File "D:\\Sources\\IkomIris\\ImageProcessingAutomation\\dev\\env\\lib\\site-packages\\iris\\nodes\\vectorization\\contouring.py", line 117, in _find_class_contours
    raise VectorizationError("_find_class_contours: Number of contours must be equal to 1.")
'}
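Since the pipeline is configured with store_error_manager, failures like this surface in the pipeline output rather than being raised. A minimal sketch for detecting this case, assuming the output is a dict whose `error` entry matches the dump above (the helper name is illustrative, not part of open-iris):

```python
def failed_with_vectorization_error(output: dict) -> bool:
    """Return True if the pipeline output carries a VectorizationError like the one above."""
    error = output.get("error")
    return error is not None and error.get("error_type") == "VectorizationError"
```

This makes it easy to count failures over a batch of images, or to trigger a retry with a different pre-processing strategy.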

Unfortunately, I can't provide you with an image sample to respect subject anonymity. Looking at the call stack and source code, it seems to fail in the pupil detection.

I'm aware the pipeline is designed to process IR images. Still, are there any recommended pre-processing steps that would let me successfully use the pipeline on visible-spectrum images? Thanks!

wiktorlazarski commented 1 month ago

Hey @cleosia,

Thank you for your interest in open-iris and raising that issue.

Regarding your issue, there is not much I can recommend here. The error you are seeing is caused by a bad segmentation map: the semantic segmentation model produced a scattered segmentation map containing more than one blob for the iris, pupil, or eyeball class, each larger than the thresholds hardcoded in this function. For more context, check out ContouringAlgorithm.

Having said that, there are two things you may try to improve how open-iris works on your data:

1. Modify the above-mentioned thresholds so that blob filtering in the contouring algorithm leaves only one blob for further processing.
2. Modify/improve the input image so that it more closely resembles the images we used to train our semantic segmentation model; you can find more details in the SEMSEG_MODEL_CARD.md file. When we spent some time analysing what guides our semantic segmentation model during inference, we found that high frequencies (edges) are very frequently extracted as features for generating the segmentation maps. I would therefore advise boosting the edge signal in your images. This may help produce more consistent, less scattered geometry segmentation maps and eliminate the error you're encountering.

Hope that helps or at least gives you some intuition on what you may work on to improve your results!

Best regards, Wiktor

wiktorlazarski commented 3 weeks ago

Hey @cleosia,

Please let me know whether you managed to solve your problem, so we can close this issue, or whether you would still like some support from our side.

Best regards, Wiktor

cleosia commented 3 weeks ago

Hey Wiktor,

Thanks for your follow-up. I would have liked to give you more complete feedback, but here is my current status: between my original message and your answer, I tried several RGB-to-grayscale conversion methods, including using each RGB channel individually, and this reduced the failure rate to 4%. The drawback of that improvement is that this issue fell to the bottom of my todo stack, so I couldn't try your suggestions, and I don't know when (or whether) it will become a priority again. So I think it's best to close this issue for now. Thanks for your support on this!

wiktorlazarski commented 3 weeks ago

Thank you for your feedback. I'm closing this issue for now. However, if this becomes a priority for you again, please feel free to reach out to us. We can also hop on a call to discuss your issue and perhaps give better feedback.

Best regards, Wiktor