credo-ai / credoai_lens

Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
https://credoai-lens.readthedocs.io/en/stable/
Apache License 2.0

Feat/3.7 support #333

Closed IanAtCredo closed 1 year ago

IanAtCredo commented 1 year ago

Describe your changes

Removed requirements to add Python 3.7 support for Lens.

To install Python 3.7 on an ARM (Apple Silicon) Mac, you'll have to do this:

```bash
## create empty environment
conda create -n py37

## activate
conda activate py37

## use x86_64 architecture channel(s)
conda config --env --set subdir osx-64

## install python, numpy, etc. (add more packages here...)
conda install python=3.7 numpy
```
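If you want to double-check that the environment really picked up the x86_64 (Rosetta) build of Python rather than a native arm64 one, a quick optional sanity check (not part of the original instructions) is:

```bash
## optional: confirm the interpreter is the x86_64 build and the expected 3.7.x release
python -c "import platform; print(platform.python_version(), platform.machine())"
```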

Install Lens from the test PyPI server like so: `pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple credoai-lens`
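To sanity-check that the test-PyPI build imports under 3.7, something like the following should work (a sketch; `credoai.__version__` is assumed to be exposed via `credoai/_version.py` and the attribute name may differ):

```bash
## confirm the interpreter version and that credoai-lens imports cleanly
## note: credoai.__version__ is an assumption, not confirmed by this PR
python -c "import sys, credoai; print(sys.version.split()[0], credoai.__version__)"
```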

Issue ticket number and link

Known outstanding issues that are not fully accounted for

Checklist before requesting a review

Extra-mile Checklist

github-actions[bot] commented 1 year ago

Coverage Report
| File | Stmts | Miss | Cover | Missing |
|------|------:|-----:|------:|---------|
| **credoai** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `_version.py` | 1 | 0 | 100% | |
| **credoai/artifacts** | | | | |
| `__init__.py` | 7 | 0 | 100% | |
| **credoai/artifacts/data** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_data.py` | 117 | 14 | 88% | 55, 155, 158, 173, 180, 187, 191, 196, 199, 202, 214, 217, 225, 241 |
| `comparison_data.py` | 63 | 13 | 79% | 53, 60, 71, 76, 81, 90, 96, 100, 105, 114, 147, 153, 156 |
| `tabular_data.py` | 42 | 6 | 86% | 52, 76, 80, 99, 101, 108 |
| **credoai/artifacts/model** | | | | |
| `__init__.py` | 0 | 0 | 100% | |
| `base_model.py` | 42 | 2 | 95% | 57, 103 |
| `classification_model.py` | 69 | 33 | 52% | 78–81, 97–101, 105–149, 218–221 |
| `comparison_model.py` | 11 | 0 | 100% | |
| `constants_model.py` | 5 | 0 | 100% | |
| `regression_model.py` | 11 | 4 | 64% | 41–43, 46 |
| **credoai/evaluators** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `data_fairness.py` | 160 | 13 | 92% | 95–102, 110, 235, 262, 292–304, 421, 456–457 |
| `data_profiler.py` | 61 | 4 | 93% | 58, 82–83, 102 |
| `deepchecks_credoai.py` | 40 | 3 | 92% | 128–137 |
| `equity.py` | 113 | 6 | 95% | 86, 167–169, 254–255 |
| `evaluator.py` | 73 | 6 | 92% | 68, 71, 90, 116, 185, 192 |
| `fairness.py` | 125 | 2 | 98% | 122, 271 |
| `feature_drift.py` | 59 | 1 | 98% | 78 |
| `identity_verification.py` | 112 | 2 | 98% | 155–156 |
| `model_profiler.py` | 102 | 32 | 69% | 103–109, 125–138, 166–169, 182–187, 190–222, 264–265, 274–275, 313 |
| `performance.py` | 86 | 7 | 92% | 114, 135–141 |
| `privacy.py` | 118 | 4 | 97% | 422, 459–461 |
| `ranking_fairness.py` | 112 | 14 | 88% | 149–150, 170, 189, 195–196, 392–414, 419–449 |
| `security.py` | 97 | 1 | 99% | 321 |
| `shap_credoai.py` | 87 | 14 | 84% | 127, 135–136, 146–152, 178–179, 261–262, 292–300 |
| `survival_fairness.py` | 67 | 45 | 33% | 47–59, 64–75, 78–89, 92–110, 113, 116, 119 |
| **credoai/evaluators/utils** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `fairlearn.py` | 18 | 1 | 94% | 93 |
| `utils.py` | 34 | 3 | 91% | 13, 41–42 |
| `validation.py` | 91 | 27 | 70% | 11–12, 28, 48–49, 51–53, 60, 70, 72, 76–81, 94, 97, 100, 103–104, 121–128, 134–140, 143 |
| **credoai/governance** | | | | |
| `__init__.py` | 1 | 0 | 100% | |
| **credoai/lens** | | | | |
| `__init__.py` | 2 | 0 | 100% | |
| `lens.py` | 206 | 13 | 94% | 59, 201–202, 238–243, 300, 342, 366, 448, 463, 467, 479 |
| `lens_validation.py` | 76 | 36 | 53% | 7–8, 45, 48–57, 73, 77, 82–86, 95, 100–103, 130, 133–153, 181–183 |
| `pipeline_creator.py` | 60 | 12 | 80% | 20–21, 37, 79–91 |
| `utils.py` | 39 | 28 | 28% | 20–27, 49–52, 71–82, 99, 106–109, 128–135 |
| **credoai/modules** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `constants_deepchecks.py` | 2 | 0 | 100% | |
| `constants_metrics.py` | 19 | 0 | 100% | |
| `constants_threshold_metrics.py` | 3 | 0 | 100% | |
| `metric_utils.py` | 24 | 18 | 25% | 10–25, 29–50 |
| `metrics.py` | 88 | 13 | 85% | 63, 67, 70–71, 74, 84, 123, 135–140, 178, 185, 187 |
| `metrics_credoai.py` | 189 | 52 | 72% | 68–69, 73, 93–102, 107–109, 132–160, 176–179, 206, 230–231, 315–321, 397–403, 439–440, 510–511, 559, 663 |
| `stats.py` | 96 | 48 | 50% | 16–19, 22–27, 30–32, 35–40, 43–57, 60–65, 107, 133–160, 192, 205–221 |
| `stats_utils.py` | 5 | 3 | 40% | 5–8 |
| **credoai/prism** | | | | |
| `__init__.py` | 3 | 0 | 100% | |
| `compare.py` | 36 | 2 | 94% | 72, 88 |
| `prism.py` | 36 | 4 | 89% | 46, 48, 59, 86 |
| `task.py` | 17 | 2 | 88% | 30, 37 |
| **credoai/prism/comparators** | | | | |
| `__init_.py` | 0 | 0 | 100% | |
| `comparator.py` | 17 | 3 | 82% | 34, 42, 47 |
| `metric_comparator.py` | 44 | 2 | 95% | 125, 131 |
| **credoai/utils** | | | | |
| `__init__.py` | 5 | 0 | 100% | |
| `common.py` | 104 | 33 | 68% | 55, 72–73, 79, 88–95, 106–107, 124–130, 135, 140–145, 156–163, 190 |
| `constants.py` | 2 | 0 | 100% | |
| `dataset_utils.py` | 61 | 35 | 43% | 23, 26–31, 50, 54–55, 88–119 |
| `logging.py` | 55 | 13 | 76% | 10–11, 14, 19–20, 23, 27, 44, 58–62 |
| `model_utils.py` | 78 | 48 | 38% | 11–12, 21–26, 38–39, 42–43, 48–53, 69–114, 120–127 |
| `version_check.py` | 15 | 3 | 80% | 13–17 |
| **TOTAL** | 3218 | 625 | 81% | |
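For reference, a report like the one above can usually be reproduced locally with pytest-cov (a sketch, assuming the suite lives under `tests/`; the CI workflow may use different options):

```bash
## generate a line-level coverage report for the credoai package
pip install pytest pytest-cov
pytest tests/ --cov=credoai --cov-report=term-missing
```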