linaskerath / RP_Greenland


Feature importance (LIME, SHAP) #18

Open ninazahn opened 1 year ago

ninazahn commented 1 year ago

Enhancing the training set with additional features does not guarantee better performance. Some features might not carry any valuable information and thus only contribute noise. In future work, we want to analyse the importance of the existing features as well as of any new feature we add. Because the area we are predicting on is static, it is easy for the model to "memorize" pixel positions. This could lead to it learning location-specific information rather than trends and sensible rules about snow melt. A feature importance analysis can detect these kinds of issues so we can prevent them in the future.
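As a starting point, permutation importance can flag the location-memorization problem described above: if shuffling the pixel-coordinate columns hurts the score as much as shuffling physical features, the model is leaning on position. A minimal sketch with scikit-learn, using synthetic data in place of the real melt features (the feature names `x_pos`, `y_pos`, `temp` are illustrative, not from the repo):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
x_pos = rng.uniform(0, 1, n)   # pixel x coordinate (static prediction area)
y_pos = rng.uniform(0, 1, n)   # pixel y coordinate
temp = rng.normal(0, 1, n)     # stand-in for a physically meaningful feature

# In this toy setup the target depends only on temp, not on position.
target = 2.0 * temp + rng.normal(0, 0.1, n)

names = ["x_pos", "y_pos", "temp"]
X = np.column_stack([x_pos, y_pos, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, target, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature column on held-out data and measure the score drop.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
imp = dict(zip(names, result.importances_mean))
for name in names:
    print(f"{name}: {imp[name]:.3f}")
```

If the real model shows large importances for coordinate-like features, that is the memorization signal to investigate.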

Investigate and/or reverse-engineer the decision process of the model: find counterfactuals, do permutation testing, etc.
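For the permutation-testing part, scikit-learn's `permutation_test_score` compares the model's real cross-validated score against scores obtained on shuffled targets; if the two are similar, the apparent skill is noise. A sketch on synthetic data (the `Ridge` model and data shapes are placeholders, not the repo's setup):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
# Target genuinely depends on the first feature, so the model has real skill.
y = 1.5 * X[:, 0] + rng.normal(scale=0.2, size=300)

# Refit on shuffled targets n_permutations times to build a null distribution.
score, perm_scores, p_value = permutation_test_score(
    Ridge(), X, y, cv=5, n_permutations=30, random_state=0
)
print(f"true R^2 = {score:.3f}, permutation p = {p_value:.3f}")
```

A low p-value means the model's score is very unlikely under shuffled labels, i.e. it learned a real relationship rather than memorizing structure in the inputs.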

linaskerath commented 1 year ago

We also need to look separately at the aggregate features (convolutions): possibly add or remove some of them, and test which aggregates are good predictors and which convolution window sizes are appropriate.
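One way to test window sizes is to build the same aggregate at several sizes and compare predictive scores. A sketch using `scipy.ndimage.uniform_filter` as a stand-in for the repo's convolution aggregates (the grid shape and the 5x5 target are synthetic assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
grid = rng.normal(size=(64, 64))                # raw per-pixel feature

# Toy target driven by 5x5 neighbourhood context, so a 5x5 aggregate should win.
target = uniform_filter(grid, size=5).ravel()

scores = {}
for window in (3, 5, 9):
    agg = uniform_filter(grid, size=window).ravel()   # mean-convolution aggregate
    scores[window] = cross_val_score(
        LinearRegression(), agg.reshape(-1, 1), target, cv=5
    ).mean()
    print(f"window {window}x{window}: R^2 = {scores[window]:.3f}")
```

Running the same comparison with the real target would show which window sizes carry signal and which aggregates can be dropped.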