aeon-toolkit / aeon

A toolkit for machine learning from time series
https://aeon-toolkit.org/
BSD 3-Clause "New" or "Revised" License

[ENH] Interpretability for HIVE-COTE V2 Classifier #663

Open nandinib1999 opened 1 year ago

nandinib1999 commented 1 year ago

Describe the issue

Hi team, just wanted to check whether any interpretability frameworks, such as SHAP or LIME, are currently supported for the HIVE-COTE V2 classifier. If not, does the team plan to integrate anything in the near future?

Suggest a potential alternative/fix

No response

Additional context

I am working on a time-series classification problem using HIVE-COTE V2 and would really love to interpret the model predictions.

Also, how do you suggest interpreting the results in the current scenario?

TonyBagnall commented 1 year ago

Hi, happy to hear you are using HC2. The short answer is that there are no formal mechanisms as yet, and our plans to include them are more medium term than short term. HC2 is a meta ensemble, so the first useful piece of information is the weight assigned to each of the four components (a short example of reading these follows the list):

    stc_weight_ : float
        The weight for STC probabilities.
    drcif_weight_ : float
        The weight for DrCIF probabilities.
    arsenal_weight_ : float
        The weight for Arsenal probabilities.
    tde_weight_ : float
        The weight for TDE probabilities.

If one component is noticeably better on the train data (which is used to form these weights), then you can look more closely at the internal workings of that component. For example, the shapelets notebook shows how to recover the most discriminatory shapelets, and I think we can also produce feature relevance graphs. Matthew is off this week but can no doubt give much better advice than me when he comes back. We would really like to hear what sort of thing you are thinking of in terms of interpretability.
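
As a hedged sketch (not the maintainers' code) of one way to look at discriminatory shapelets along the lines of the shapelets notebook: fit the `RandomShapeletTransform` (the transform underlying the STC component) directly and inspect what it retained. The import path and the `shapelets` attribute are assumptions about the aeon version in use and may differ between releases.

```python
from aeon.datasets import load_unit_test
from aeon.transformations.collection.shapelet_based import RandomShapeletTransform

X_train, y_train = load_unit_test(split="train")

# Small search budget so the example runs quickly.
st = RandomShapeletTransform(
    max_shapelets=10, n_shapelet_samples=100, random_state=0
)
st.fit(X_train, y_train)

# `shapelets` holds the retained shapelets ordered by information gain, so the
# first entries are the most discriminatory candidates found by the search.
for shapelet in st.shapelets[:3]:
    print(shapelet)
```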