credo-ai / credoai_lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
https://credoai-lens.readthedocs.io/en/stable/
Apache License 2.0

Quick fixes for bugs relating to validation of string-containing DFs and default value for empty y_prob #323

Closed · esherman-credo closed 1 year ago

esherman-credo commented 1 year ago

Describe your changes

Change the default value for `y_prob` in `fairness.py` from a tuple to `None`.
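The first change can be sketched as follows. This is a hypothetical stand-in, not the actual `fairness.py` API; `evaluate_fairness` and its metric logic are illustrative only:

```python
def evaluate_fairness(y_true, y_pred, y_prob=None):
    """Hypothetical stand-in for an evaluator signature like fairness.py's.

    With the old tuple default, y_prob was never None, so "no
    probabilities supplied" could not be detected with an identity
    check; a None default makes the optional argument unambiguous.
    """
    metrics = {"accuracy": sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)}
    if y_prob is not None:
        # probability-based metrics (e.g. ROC AUC) would be computed here
        metrics["used_probabilities"] = True
    return metrics
```

Using `is None` as the sentinel check means callers who omit `y_prob` cleanly skip the probability-based branch.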

Change variance checking for tabular data with sensitive features to simply verify that each sensitive group contains more than one value for the outcome. This addresses an issue with string-type outcomes, which have no `std` function.
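The second change can be sketched in pure Python (the function and argument names below are illustrative, not Credo AI's actual validation code): instead of computing a standard deviation, which fails for string dtypes, count distinct outcome values per sensitive group.

```python
from collections import defaultdict

def check_outcome_variance(rows, sensitive_key, outcome_key):
    """Require more than one distinct outcome value per sensitive group.

    Works for string-typed outcomes, which have no std(); a numeric
    variance check would fail on them.
    """
    groups = defaultdict(set)
    for row in rows:
        groups[row[sensitive_key]].add(row[outcome_key])
    degenerate = [g for g, vals in groups.items() if len(vals) <= 1]
    if degenerate:
        raise ValueError(f"Outcome shows no variation for groups: {degenerate}")
    return True
```

A distinct-value count subsumes the numeric check: any group whose outcome has nonzero variance necessarily has more than one distinct value.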

Issue ticket number and link

https://credo-ai.atlassian.net/browse/DSP-464 https://credo-ai.atlassian.net/browse/DSP-465

Known outstanding issues that are not fully accounted for

N/A

Checklist before requesting a review

Extra-mile Checklist

github-actions[bot] commented 1 year ago

Coverage

Coverage Report
| File | Stmts | Miss | Cover | Missing |
|------|------:|-----:|------:|---------|
| **credoai** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `_version.py` | 1 | 0 | 100% | |
| **credoai/artifacts** | | | | |
| `__init__.py` | 7 | 0 | 100% | |
| **credoai/artifacts/data** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_data.py` | 117 | 14 | 88% | 55, 155, 158, 173, 180, 187, 191, 196, 199, 202, 214, 217, 225, 241 |
| `comparison_data.py` | 63 | 13 | 79% | 53, 60, 71, 76, 81, 90, 96, 100, 105, 114, 147, 153, 156 |
| `tabular_data.py` | 42 | 6 | 86% | 52, 76, 80, 99, 101, 108 |
| **credoai/artifacts/model** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_model.py` | 42 | 2 | 95% | 57, 103 |
| `classification_model.py` | 63 | 28 | 56% | 78–81, 98–142, 211–214 |
| `comparison_model.py` | 11 | 0 | 100% | |
| `constants_model.py` | 5 | 0 | 100% | |
| `regression_model.py` | 11 | 4 | 64% | 41–43, 46 |
| **credoai/evaluators** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `data_fairness.py` | 160 | 13 | 92% | 85–92, 100, 225, 252, 282–294, 411, 446–447 |
| `data_profiler.py` | 61 | 4 | 93% | 49, 73–74, 93 |
| `deepchecks_credoai.py` | 40 | 3 | 92% | 113–122 |
| `equity.py` | 113 | 6 | 95% | 73, 154–156, 227–228 |
| `evaluator.py` | 73 | 6 | 92% | 68, 71, 90, 116, 185, 192 |
| `fairness.py` | 125 | 2 | 98% | 111, 260 |
| `feature_drift.py` | 59 | 1 | 98% | 66 |
| `identity_verification.py` | 112 | 2 | 98% | 144–145 |
| `model_profiler.py` | 102 | 32 | 69% | 93–99, 115–128, 156–159, 172–177, 180–212, 254–255, 264–265, 303 |
| `performance.py` | 86 | 7 | 92% | 103, 124–130 |
| `privacy.py` | 118 | 4 | 97% | 410, 447–449 |
| `ranking_fairness.py` | 112 | 14 | 88% | 144–145, 165, 184, 190–191, 387–409, 414–444 |
| `security.py` | 97 | 1 | 99% | 309 |
| `shap_credoai.py` | 87 | 14 | 84% | 117, 125–126, 136–142, 168–169, 251–252, 282–290 |
| `survival_fairness.py` | 67 | 45 | 33% | 34–46, 51–62, 65–76, 79–97, 100, 103, 106 |
| **credoai/evaluators/utils** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `fairlearn.py` | 18 | 1 | 94% | 93 |
| `utils.py` | 34 | 3 | 91% | 13, 41–42 |
| `validation.py` | 91 | 27 | 70% | 11–12, 28, 48–49, 51–53, 60, 70, 72, 76–81, 94, 97, 100, 103–104, 121–128, 134–140, 143 |
| **credoai/governance** | | | | |
| `__init__.py` | 1 | 0 | 100% | |
| **credoai/lens** | | | | |
| `__init__.py` | 2 | 0 | 100% | |
| `lens.py` | 206 | 13 | 94% | 59, 201–202, 238–243, 300, 342, 366, 448, 463, 467, 479 |
| `lens_validation.py` | 76 | 36 | 53% | 7–8, 45, 48–57, 73, 77, 82–86, 95, 100–103, 130, 133–153, 181–183 |
| `pipeline_creator.py` | 60 | 12 | 80% | 20–21, 37, 79–91 |
| `utils.py` | 39 | 28 | 28% | 20–27, 49–52, 71–82, 99, 106–109, 128–135 |
| **credoai/modules** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `constants_deepchecks.py` | 2 | 0 | 100% | |
| `constants_metrics.py` | 19 | 0 | 100% | |
| `constants_threshold_metrics.py` | 3 | 0 | 100% | |
| `metric_utils.py` | 24 | 18 | 25% | 9–24, 28–49 |
| `metrics.py` | 88 | 13 | 85% | 63, 67, 70–71, 74, 84, 123, 135–140, 178, 185, 187 |
| `metrics_credoai.py` | 189 | 52 | 72% | 68–69, 73, 93–102, 107–109, 132–160, 176–179, 206, 230–231, 315–321, 397–403, 439–440, 510–511, 559, 663 |
| `stats.py` | 97 | 50 | 48% | 15–18, 21–26, 29–31, 34–39, 42–56, 59–64, 106, 132–159, 191, 202–217 |
| `stats_utils.py` | 5 | 3 | 40% | 5–8 |
| **credoai/prism** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `compare.py` | 35 | 2 | 94% | 71, 87 |
| `prism.py` | 36 | 4 | 89% | 46, 48, 59, 86 |
| `task.py` | 17 | 2 | 88% | 30, 37 |
| **credoai/prism/comparators** | | | | |
| `__init_.py` | 0 | 0 | 100% | |
| `comparator.py` | 17 | 3 | 82% | 34, 42, 47 |
| `metric_comparator.py` | 44 | 2 | 95% | 125, 131 |
| **credoai/utils** | | | | |
| `__init__.py` | 5 | 0 | 100% | |
| `common.py` | 104 | 33 | 68% | 55, 72–73, 79, 88–95, 106–107, 124–130, 135, 140–145, 156–163, 190 |
| `constants.py` | 2 | 0 | 100% | |
| `dataset_utils.py` | 61 | 35 | 43% | 23, 26–31, 50, 54–55, 88–119 |
| `logging.py` | 55 | 13 | 76% | 10–11, 14, 19–20, 23, 27, 44, 58–62 |
| `model_utils.py` | 78 | 48 | 38% | 11–12, 21–26, 38–39, 42–43, 48–53, 69–114, 120–127 |
| `version_check.py` | 15 | 4 | 73% | 13–17, 23 |
| **TOTAL** | 3212 | 623 | 81% | |