ombhojane opened 7 months ago
Aim: To increase the interpretability of the solar forecasting model and enable users to understand the model's decision-making process
Steps:
A new command-line option or flag (e.g., --explain) will be added to the existing forecasting script (e.g., python forecast.py --explain).
When the --explain flag is provided, the forecasting script will perform the following steps:
a. Execute the existing forecasting function to generate solar energy predictions.
b. Integrate the selected XAI technique (e.g., SHAP, LIME) with the trained XGBoost tree model. A suitable XAI method will be chosen after testing on the data and iterating.
c. Compute and analyze feature importances, decision paths, and local explanations using the chosen XAI technique.
d. Display or visualize the model's decision-making process, highlighting the influential features and their respective contributions to the final prediction.
The visualizations and explanations generated by the XAI module will be presented in a user-friendly format, such as interactive plots.
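As a rough illustration of steps b–d, here is a minimal sketch using SHAP's TreeExplainer on an XGBoost regressor. SHAP is only one of the candidate techniques mentioned above, and the function and argument names here are hypothetical placeholders rather than existing code in the repo:

```python
# Minimal sketch (not a final implementation): explaining an XGBoost-style
# forecast with SHAP. The function name and inputs are hypothetical.
import shap
import xgboost as xgb
import pandas as pd


def explain_forecast(model: xgb.XGBRegressor, features: pd.DataFrame):
    """Compute SHAP values and show global and local explanations."""
    # TreeExplainer is exact and fast for tree ensembles such as XGBoost.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(features)

    # Global view: which features drive the forecasts overall.
    shap.summary_plot(shap_values, features)

    # Local view: why the model made its most recent prediction.
    shap.force_plot(
        explainer.expected_value,
        shap_values[-1, :],
        features.iloc[-1, :],
        matplotlib=True,
    )
    return shap_values
```

The same SHAP values could also feed interactive plots (e.g. SHAP's HTML force plots) for the user-friendly presentation described above.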
I hope this helps improve the model and build trust in the predictions it makes. @peterdudfield @zakwatts, what do you say?
I've submitted a GSoC proposal related to this issue and I'm happy to hear feedback on it. I'm also learning XAI daily and finding it really interesting, and I'm keen to implement this feature at OpenClimateFix to make the model transparent, trustworthy, and ultimately more useful to its users.
Detailed Description
The current solar forecasting model is a gradient boosted tree model, which can achieve high predictive accuracy but often lacks interpretability. It is proposed to evaluate the model using Explainable AI (XAI) techniques and increase its interpretability, following the steps outlined above.
Context
Understanding the model's decision-making process and the relative importance of input features is crucial for trust, transparency, and accountability in this open-source project. Increasing the model's interpretability using XAI techniques can:
• Facilitate the integration of domain knowledge, potentially improving model performance and interpretability.
• Enhance transparency and trust among users and stakeholders.
• Guide model refinement and improvement efforts based on insights gained from XAI analysis.
• Assess the model's robustness and fairness across different geographic regions or weather conditions, identifying potential biases or inconsistencies.
Possible Implementation
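One possible way to wire the proposed --explain flag into the forecasting script is sketched below. The entry-point names (run_forecast, load_model, load_features) are hypothetical stand-ins, since the actual functions exposed by forecast.py may differ; explain_forecast refers to the SHAP sketch earlier in this issue:

```python
# Sketch of the proposed --explain flag. run_forecast, load_model and
# load_features are hypothetical placeholders for whatever forecast.py
# actually exposes; explain_forecast is the SHAP sketch shown above.
import argparse


def main():
    parser = argparse.ArgumentParser(description="Solar energy forecasting")
    parser.add_argument(
        "--explain",
        action="store_true",
        help="Also generate XAI explanations (e.g. SHAP) for the forecast",
    )
    args = parser.parse_args()

    model = load_model()        # hypothetical: load the trained XGBoost model
    features = load_features()  # hypothetical: build the input feature frame
    predictions = run_forecast(model, features)

    if args.explain:
        # Compute and display feature importances and local explanations.
        explain_forecast(model, features)

    return predictions


if __name__ == "__main__":
    main()
```

Keeping the explanation step behind the flag means the default forecasting path is unchanged, and users opt in to the extra XAI computation only when they want to inspect the model's reasoning.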