Increase Trust and Confidence in Your ML Models Through Explainable AI
Business users may be reluctant to accept predictions from ML models without understanding how those predictions were made, and that lack of understanding erodes trust and acceptance. Data scientists, in turn, need mechanisms to understand why their models deviate from expected predictions and how to correct them. This talk looks at how to build transparency into ML pipelines and predictions, giving business users visibility into both the process and the outcomes.
This repository holds the presentation materials and the code that accompanies them.
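
As a taste of the kind of transparency the talk covers, below is a minimal sketch of per-prediction explanations using the SHAP library. This is an illustrative example, not the repository's code; the dataset and model are stand-ins.

```python
# Minimal sketch: explaining a single model prediction with SHAP.
# Illustrative only; the dataset and model here are placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a familiar public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# SHAP values attribute a prediction to per-feature contributions,
# giving a concrete answer to "why did the model predict this?".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])

# Rank features by how strongly each pushed this one prediction
# above or below the model's average output.
contributions = sorted(
    zip(data.feature_names, shap_values[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, value in contributions:
    print(f"{name}: {value:+.2f}")
```

Surfacing a ranked list like this alongside each prediction is one simple way to give business users visibility into the model's reasoning.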