credo-ai / credoai_lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
https://credoai-lens.readthedocs.io/en/stable/
Apache License 2.0

Feat/striate installation #316

Closed. IanAtCredo closed this pull request 1 year ago.

IanAtCredo commented 1 year ago

Describe your changes

Added the ability to install the core functionality separately from the heavier optional functionality. Please test the various installation configurations and the behavior when requirements are missing!
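Splitting core from heavy functionality is usually done with setuptools "extras" plus guarded imports of the optional packages. A minimal sketch of the guarded-import side (the helper name `require_extra` and the extra name `full` are illustrative, not necessarily the names used in this PR):

```python
import importlib


def require_extra(module_name: str, extra: str):
    """Import an optional dependency, pointing users at the right extra if it is missing.

    `extra` is the hypothetical name of the pip extras group that provides the module.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"'{module_name}' is required for this feature. "
            f"Install it with: pip install credoai-lens[{extra}]"
        ) from exc
```

On the packaging side, the heavy dependencies would live under an `extras_require` group, so `pip install credoai-lens` pulls only the core while `pip install "credoai-lens[full]"` pulls everything (extra name hypothetical).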

Checklist before requesting a review

Extra-mile Checklist

github-actions[bot] commented 1 year ago


Coverage Report
| File | Stmts | Miss | Cover | Missing |
|------|------:|-----:|------:|---------|
| **credoai** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `_version.py` | 1 | 0 | 100% | |
| **credoai/artifacts** | | | | |
| `__init__.py` | 7 | 0 | 100% | |
| **credoai/artifacts/data** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_data.py` | 116 | 13 | 89% | 54, 154, 157, 172, 179, 186, 190, 195, 198, 201, 213, 216, 224 |
| `comparison_data.py` | 63 | 13 | 79% | 53, 60, 71, 76, 81, 90, 96, 100, 105, 114, 147, 153, 156 |
| `tabular_data.py` | 42 | 6 | 86% | 52, 76, 80, 99, 101, 108 |
| **credoai/artifacts/model** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_model.py` | 42 | 2 | 95% | 57, 103 |
| `classification_model.py` | 57 | 24 | 58% | 77–80, 97–141 |
| `comparison_model.py` | 11 | 0 | 100% | |
| `constants_model.py` | 5 | 0 | 100% | |
| `regression_model.py` | 11 | 4 | 64% | 41–43, 46 |
| **credoai/evaluators** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `data_fairness.py` | 160 | 13 | 92% | 85–92, 100, 225, 252, 282–294, 411, 446–447 |
| `data_profiler.py` | 61 | 4 | 93% | 49, 73–74, 93 |
| `deepchecks_credoai.py` | 40 | 3 | 92% | 113–122 |
| `equity.py` | 113 | 6 | 95% | 73, 154–156, 227–228 |
| `evaluator.py` | 73 | 6 | 92% | 68, 71, 90, 116, 185, 192 |
| `fairness.py` | 125 | 2 | 98% | 111, 260 |
| `feature_drift.py` | 59 | 1 | 98% | 66 |
| `identity_verification.py` | 112 | 2 | 98% | 144–145 |
| `model_profiler.py` | 102 | 32 | 69% | 93–99, 115–128, 156–159, 172–177, 180–212, 254–255, 264–265, 303 |
| `performance.py` | 86 | 7 | 92% | 103, 124–130 |
| `privacy.py` | 118 | 4 | 97% | 410, 447–449 |
| `ranking_fairness.py` | 112 | 14 | 88% | 144–145, 165, 184, 190–191, 387–409, 414–444 |
| `security.py` | 97 | 1 | 99% | 309 |
| `shap_credoai.py` | 87 | 14 | 84% | 117, 125–126, 136–142, 168–169, 251–252, 282–290 |
| `survival_fairness.py` | 67 | 45 | 33% | 34–46, 51–62, 65–76, 79–97, 100, 103, 106 |
| **credoai/evaluators/utils** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `fairlearn.py` | 18 | 1 | 94% | 93 |
| `utils.py` | 34 | 3 | 91% | 13, 41–42 |
| `validation.py` | 91 | 27 | 70% | 11–12, 28, 48–49, 51–53, 60, 70, 72, 76–81, 94, 97, 100, 103–104, 121–128, 134–140, 143 |
| **credoai/governance** | | | | |
| `__init__.py` | 1 | 0 | 100% | |
| **credoai/lens** | | | | |
| `__init__.py` | 2 | 0 | 100% | |
| `lens.py` | 206 | 13 | 94% | 59, 201–202, 238–243, 300, 342, 366, 448, 463, 467, 479 |
| `lens_validation.py` | 75 | 36 | 52% | 7–8, 43, 46–55, 71, 74, 79–83, 92, 97–100, 127, 130–150, 178–180 |
| `pipeline_creator.py` | 60 | 12 | 80% | 20–21, 37, 79–91 |
| `utils.py` | 39 | 28 | 28% | 20–27, 49–52, 71–82, 99, 106–109, 128–135 |
| **credoai/modules** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `constants_deepchecks.py` | 2 | 0 | 100% | |
| `constants_metrics.py` | 19 | 0 | 100% | |
| `constants_threshold_metrics.py` | 3 | 0 | 100% | |
| `metric_utils.py` | 24 | 18 | 25% | 9–24, 28–49 |
| `metrics.py` | 88 | 13 | 85% | 63, 67, 70–71, 74, 84, 123, 135–140, 178, 185, 187 |
| `metrics_credoai.py` | 183 | 49 | 73% | 68–69, 73, 93–102, 107–109, 132–160, 176–179, 206, 230–231, 294–296, 372–378, 414–415, 485–486, 534, 638 |
| `stats.py` | 97 | 50 | 48% | 15–18, 21–26, 29–31, 34–39, 42–56, 59–64, 106, 132–159, 191, 202–217 |
| `stats_utils.py` | 5 | 3 | 40% | 5–8 |
| **credoai/prism** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `compare.py` | 35 | 2 | 94% | 71, 87 |
| `prism.py` | 36 | 4 | 89% | 46, 48, 59, 86 |
| `task.py` | 17 | 2 | 88% | 30, 37 |
| **credoai/prism/comparators** | | | | |
| `__init_.py` | 0 | 0 | 100% | |
| `comparator.py` | 17 | 3 | 82% | 34, 42, 47 |
| `metric_comparator.py` | 44 | 2 | 95% | 125, 131 |
| **credoai/utils** | | | | |
| `__init__.py` | 5 | 0 | 100% | |
| `common.py` | 104 | 33 | 68% | 55, 72–73, 79, 88–95, 106–107, 124–130, 135, 140–145, 156–163, 190 |
| `constants.py` | 2 | 0 | 100% | |
| `dataset_utils.py` | 61 | 35 | 43% | 23, 26–31, 50, 54–55, 88–119 |
| `logging.py` | 55 | 13 | 76% | 10–11, 14, 19–20, 23, 27, 44, 58–62 |
| `model_utils.py` | 78 | 48 | 38% | 11–12, 21–26, 38–39, 42–43, 48–53, 69–114, 120–127 |
| `version_check.py` | 15 | 3 | 80% | 13–17 |
| **TOTAL** | 3198 | 614 | 81% | |