credo-ai / credoai_lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
https://credoai-lens.readthedocs.io/en/stable/
Apache License 2.0

Logger lines in performance.py throw an error #203

Closed by esherman-credo 1 year ago

esherman-credo commented 1 year ago

https://github.com/credo-ai/credoai_lens/blob/f73d4bf75a4a2aac8944031cc071459e7f5d3f1f/credoai/evaluators/performance.py#L226 and https://github.com/credo-ai/credoai_lens/blob/f73d4bf75a4a2aac8944031cc071459e7f5d3f1f/credoai/evaluators/performance.py#L239 both reference `self`, which is not passed to this static method, so they raise a `NameError: name 'self' is not defined`.

Likely introduced during the recent changes to `process_metrics` handling?

More details: this occurred in a fresh conda environment set up for the gini feature development, with a fresh install of Lens, while running quickstart.ipynb.

This cell:

[screenshot of the notebook cell omitted]

(Part of) Traceback:

[screenshot of the traceback omitted]
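
Since the screenshots are not preserved here, the following is a minimal sketch of the failure mode, using hypothetical method bodies (the real code lives in credoai/evaluators/performance.py):

```python
class Performance:
    @staticmethod
    def process_metrics(metrics):
        processed = []
        for metric in metrics:
            if metric is None:
                # BUG: a @staticmethod receives no instance, so this
                # reference raises NameError: name 'self' is not defined.
                self.logger.warning("Skipping unrecognized metric")
                continue
            processed.append(metric)
        return processed


Performance.process_metrics([None])  # triggers the NameError
```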
fabrizio-credo commented 1 year ago

@IanAtCredo I will fix this. It is only triggered by `equal_opportunity` in the example above, which is a FAIRNESS-type metric; I will update the tests to cover this metric type as well.

@esherman-credo good find 🙌
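
One way to resolve this is to drop the `self` reference in favor of a module-level logger; below is a minimal sketch under that assumption (the actual fix may be structured differently):

```python
import logging

logger = logging.getLogger(__name__)


class Performance:
    @staticmethod
    def process_metrics(metrics):
        processed = []
        for metric in metrics:
            if metric is None:
                # The module-level logger needs no instance, so the
                # @staticmethod no longer raises NameError.
                logger.warning("Skipping unrecognized metric")
                continue
            processed.append(metric)
        return processed
```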

esherman-credo commented 1 year ago

@fabrizio-credo Ian suggested I just take care of it in the gini PR. See https://github.com/credo-ai/credoai_lens/pull/205