UTSAVS26 / PyVerse

PyVerse is an open-source collection of diverse Python projects, tools, and scripts, ranging from beginner to advanced, across various domains like machine learning, web development, and automation.
MIT License

[Code Addition Request]: Explainable AI: Using Local Interpretable Model-agnostic Explanations (LIME) & SHapley Additive exPlanations (SHAP) #1085

Closed: inkerton closed this issue 6 days ago

inkerton commented 2 weeks ago

Have you completed your first issue?

Guidelines

Latest Merged PR Link

577

Project Description

Explainable AI: Using LIME and SHAP

In the realm of machine learning, models often operate as "black boxes," making it difficult to understand how they arrive at their decisions. Explainable AI (XAI) seeks to demystify these models, providing insights into their inner workings. Two powerful techniques for achieving this are Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP).

LIME (Local Interpretable Model-Agnostic Explanations)

LIME focuses on explaining individual predictions rather than the entire model. It works by perturbing the input data and observing how the model's predictions change. LIME then fits a simple, interpretable model (like a linear model) to these perturbed instances and their corresponding predictions. This local model can be easily understood and provides insights into the factors that influenced the original model's prediction.
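As a rough sketch of how this looks in practice (assuming the `lime` package; the Iris dataset and random forest below are illustrative placeholders, not part of this project), explaining a single prediction might look like:

```python
# A minimal, illustrative LIME example (not PyVerse code): explain one prediction
# from a scikit-learn random forest trained on the Iris dataset.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this instance, queries the model, and fits a local linear surrogate.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, weight) pairs from the local surrogate
```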

SHAP (SHapley Additive exPlanations)

SHAP, on the other hand, leverages game theory to assign importance to each feature in a model's prediction. It calculates Shapley values, which represent the average marginal contribution of a feature to the model's output across all possible feature combinations. By examining these Shapley values, we can understand how much each feature contributed to the final prediction.
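A comparable sketch for SHAP (again assuming the `shap` package and the same illustrative tree model, not the project's actual implementation):

```python
# A minimal, illustrative SHAP example (not PyVerse code): Shapley values for the
# same kind of tree model; TreeExplainer is a fast exact explainer for tree ensembles.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
# Per-feature contributions to each prediction; for multiclass models shap
# returns one set of values per class (exact shape depends on the shap version).
shap_values = explainer.shap_values(data.data)

# Global summary: features ranked by mean |SHAP value| across the dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```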

Key Differences Between LIME and SHAP:

| Feature | LIME | SHAP |
| --- | --- | --- |
| Focus | Local explanations for individual predictions | Global explanations for the entire model |
| Model | Fits a simple, interpretable model locally | Uses game theory to calculate feature importance |
| Visualization | Often uses bar charts or heatmaps to show feature importance | Uses force plots or decision plots to visualize feature contributions |

When to Use LIME or SHAP:

Real-World Applications:

By using LIME and SHAP, we can enhance transparency, accountability, and trust in AI systems. These techniques help us make informed decisions, identify biases, and improve the overall performance of machine learning models.

Full Name

inkerton

Participant Role

GSSOC

github-actions[bot] commented 2 weeks ago

🙌 Thank you for bringing this issue to our attention! We appreciate your input and will investigate it as soon as possible.

Feel free to join our community on Discord to discuss more!

github-actions[bot] commented 6 days ago

✅ This issue has been closed. Thank you for your contribution! If you have any further questions or issues, feel free to join our community on Discord to discuss more!