credo-ai / credoai_lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
https://credoai-lens.readthedocs.io/en/stable/
Apache License 2.0
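For readers new to the project, here is a minimal sketch of a Lens run, following the quickstart pattern in the documentation linked above. Class names and keyword arguments reflect the 1.x API as best as it can be reconstructed here and may differ between versions.

```python
# Minimal Lens sketch, assuming the 1.x quickstart API; signatures may vary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from credoai.artifacts import ClassificationModel, TabularData
from credoai.evaluators import Performance
from credoai.lens import Lens

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrap the trained model and assessment data in Lens artifacts
model = ClassificationModel(name="demo_classifier", model_like=clf)
data = TabularData(name="demo_data", X=X_test, y=y_test)

# Assemble the pipeline, attach an evaluator, and run the assessment
lens = Lens(model=model, assessment_data=data)
lens.add(Performance(metrics=["accuracy_score"]))
lens.run()
results = lens.get_results()
```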

create stats functions, moved outcome distribution out of equity evaluator, simplified #295

Closed IanAtCredo closed 1 year ago

IanAtCredo commented 1 year ago


Describe your changes

Modularized many of the equity functions, added more "stats" functions, and removed the unnecessary "describe" method (it mostly overlapped with the dataset fairness evaluator).

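For context, here is a hypothetical sketch of the kind of standalone statistical helper this refactor points at. Neither the function name nor the signature is taken from the PR; the real additions live in `credoai/modules/stats.py` (see the coverage report below).

```python
# Hypothetical sketch only: illustrates the modular "stats function" pattern
# described above. The name and signature are illustrative assumptions,
# not the PR's actual code.
import pandas as pd
from scipy.stats import chi2_contingency


def chisquare_contingency_test(
    df: pd.DataFrame, group_col: str, outcome_col: str
) -> dict:
    """Chi-square test of independence between a grouping column
    (e.g., a sensitive feature) and a categorical outcome column."""
    # Build the contingency table of group vs. outcome counts
    contingency = pd.crosstab(df[group_col], df[outcome_col])
    statistic, pvalue, dof, _ = chi2_contingency(contingency)
    return {"statistic": statistic, "pvalue": pvalue, "dof": dof}
```

Keeping helpers like this free of evaluator state is what makes them reusable across evaluators, which is the point of the modularization described above.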

IanAtCredo commented 1 year ago

> Looks good, although integration tests are failing in the export phase.
>
> Potentially the removal of the describe method has altered the provided evidence, and there is a mismatch between what the platform expects and what gets created. 🤔

Yeah, that's what I believe is happening too.

github-actions[bot] commented 1 year ago

Coverage Report
FileStmtsMissCoverMissing
credoai
   __init__.py30100% 
credoai/artifacts
   __init__.py70100% 
credoai/artifacts/data
   __init__.py00100% 
   base_data.py1071289%55, 155, 158, 173, 180, 187, 191, 195, 199, 211, 214, 221
   comparison_data.py631379%53, 60, 71, 76, 81, 90, 96, 100, 105, 114, 147, 153, 156
   tabular_data.py40685%52, 73, 77, 96, 98, 105
credoai/artifacts/model
   __init__.py00100% 
   base_model.py36294%56, 88
   classification_model.py230100% 
   comparison_model.py110100% 
   constants_model.py20100% 
   regression_model.py11464%43–45, 48
credoai/evaluators
   __init__.py150100% 
   data_fairness.py1471292%83–90, 205, 260–261, 287, 311, 334–340, 356
   data_profiler.py34294%57, 60
   deepchecks.py40392%113–122
   equity.py113695%73, 152–154, 225–226
   evaluator.py72790%67, 70, 89, 115, 135, 180, 187
   fairness.py113298%115, 228
   feature_drift.py59198%66
   identity_verification.py112298%144–145
   model_profiler.py741185%127–130, 144–147, 181–182, 191–192, 230
   performance.py84792%103, 124–130
   privacy.py118497%410, 447–449
   ranking_fairness.py1341490%136–137, 157, 178, 184–185, 382–404, 409–439
   security.py96199%297
   shap.py871484%117, 125–126, 136–142, 168–169, 251–252, 282–290
   survival_fairness.py675025%27–31, 34–46, 51–62, 65–76, 79–97, 100, 103, 106
credoai/evaluators/utils
   __init__.py30100% 
   fairlearn.py18194%93
   utils.py8188%9
   validation.py812865%14, 34–35, 37–39, 46, 67–74, 80–86, 89, 95–98, 105, 108, 111, 114–115, 119–121
credoai/governance
   __init__.py10100% 
credoai/lens
   __init__.py20100% 
   lens.py2011394%53, 195–196, 232–237, 294, 336, 360, 442, 457, 461, 473
   pipeline_creator.py601280%20–21, 37, 79–91
   utils.py392828%20–27, 49–52, 71–82, 99, 106–109, 128–135
credoai/modules
   __init__.py30100% 
   constants_deepchecks.py20100% 
   constants_metrics.py190100% 
   constants_threshold_metrics.py30100% 
   metric_utils.py241825%15–30, 34–55
   metrics.py881385%63, 67, 70–71, 74, 84, 123, 135–140, 178, 185, 187
   metrics_credoai.py1354765%68–69, 73, 93–102, 107–109, 132–160, 176–179, 206, 230–231, 294–296, 372–378, 414–415, 485–486
   stats.py975048%15–18, 21–26, 29–31, 34–39, 42–56, 59–64, 106, 132–159, 191, 202–217
   stats_utils.py5340%5–8
credoai/prism
   __init__.py30100% 
   compare.py35294%71, 87
   prism.py36489%46, 48, 59, 86
   task.py17288%30, 37
credoai/prism/comparators
   __init_.py00100% 
   comparator.py17382%34, 42, 47
   metric_comparator.py44295%125, 131
credoai/utils
   __init__.py50100% 
   common.py1023368%55, 68–69, 75, 84–91, 102–103, 120–126, 131, 136–141, 152–159, 186
   constants.py20100% 
   dataset_utils.py613543%23, 26–31, 50, 54–55, 88–119
   logging.py551376%10–11, 14, 19–20, 23, 27, 44, 58–62
   model_utils.py301163%14–19, 29–30, 35–40
   version_check.py11191%16
TOTAL287549383%