ant-research / EasyTemporalPointProcess

EasyTPP: Towards Open Benchmarking Temporal Point Processes
https://ant-research.github.io/EasyTemporalPointProcess/
Apache License 2.0

Interpretability of models #22

Closed: cckao closed this issue 2 months ago

cckao commented 2 months ago

Hi,

Thanks for this great open-source project. It is definitely a great tool for solving many real-life problems. Since I am a total newbie to this domain, I hope you could give me some advice regarding interpretability.

For example, suppose I get a prediction saying the next event will be event A. Is it possible to get something like "the model predicts event A because events B and C appear in the input event sequence"?

Thank you.

iLampard commented 2 months ago

Hi,

The prediction is better interpreted as "the model predicts event A given the history of events (which is a sequence of events A, B, C, etc.)". You can have a look at the Neural Hawkes Process paper (https://arxiv.org/abs/1612.09328) or other papers for a more detailed explanation.
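To make this concrete, here is a minimal toy sketch (plain PyTorch, not EasyTPP's actual API; the class and all numbers are illustrative) of how a neural TPP scores the next event type: the per-type intensity is computed from a hidden state that encodes the entire history, so no single past event is isolated as "the reason" for the prediction.

```python
# Minimal toy sketch (not EasyTPP's API): a neural TPP scores the next event
# type from a hidden state that summarizes the *entire* history, so the
# prediction is conditioned on the whole sequence rather than on single events.
import torch
import torch.nn as nn

class ToyNeuralTPP(nn.Module):
    def __init__(self, num_event_types, hidden_size=32):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, hidden_size)
        self.rnn = nn.GRU(hidden_size + 1, hidden_size, batch_first=True)
        self.intensity_head = nn.Linear(hidden_size, num_event_types)

    def forward(self, event_types, inter_event_times):
        # event_types: (batch, seq_len) int64; inter_event_times: (batch, seq_len) float
        x = torch.cat([self.embed(event_types), inter_event_times.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)                      # hidden state encodes the full history
        # softplus keeps the per-type intensities positive, as in neural-Hawkes-style models
        return nn.functional.softplus(self.intensity_head(h[:, -1]))

model = ToyNeuralTPP(num_event_types=3)
types = torch.tensor([[1, 2, 0]])               # toy history: events B, C, A
dts = torch.tensor([[0.5, 1.2, 0.3]])           # inter-event times
print(model(types, dts))                        # intensity of each candidate next event type
```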

cckao commented 2 months ago

Hi,

Thanks for the pointer. It's a helpful guide.

Regarding interpretability, I am sorry I didn't phrase my question clearly. What I meant is: can we get an estimate of how much event B (or any other past event) contributes to the prediction of event A? Something similar to feature importance in Gradient Boosted Decision Trees.

iLampard commented 2 months ago

Hi,

A Granger causality matrix over event types can solve your problem. You can have a look at this paper: https://arxiv.org/abs/2002.07906.

However, we have not implemented this model in EasyTPP, as it is not a standard TPP and is hard to fit into our current codebase.
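If you only need a rough, model-agnostic importance signal, one common approximation (outside EasyTPP, and not the attribution method of the paper above) is gradient saliency: differentiate the predicted intensity of event A with respect to the embeddings of the past events. Below is a hedged toy sketch in PyTorch; the tiny model, event labels, and numbers are all made up for illustration.

```python
# Hedged sketch (not EasyTPP, not the CAUSE model): a crude gradient-saliency
# proxy for "how much does each past event contribute to the predicted
# intensity of event A", using a toy GRU-based TPP defined inline.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_types, hidden = 3, 32
embed = nn.Embedding(num_types, hidden)
rnn = nn.GRU(hidden + 1, hidden, batch_first=True)
head = nn.Linear(hidden, num_types)

types = torch.tensor([[1, 2, 0]])          # toy history: events B, C, A
dts = torch.tensor([[0.5, 1.2, 0.3]])      # inter-event times
target_type = 0                            # "event A" in the toy labels

emb = embed(types)                         # (1, seq_len, hidden)
emb.retain_grad()                          # keep gradients on the history embeddings
x = torch.cat([emb, dts.unsqueeze(-1)], dim=-1)
h, _ = rnn(x)
intensity_a = F.softplus(head(h[:, -1]))[0, target_type]
intensity_a.backward()

# Gradient norm per history position ~ rough importance of each past event
importance = emb.grad.norm(dim=-1).squeeze(0)
for pos, score in enumerate(importance.tolist()):
    print(f"history position {pos}: saliency {score:.4f}")
```

This only measures local sensitivity of one prediction and is untrained here, so treat it as a starting point rather than a substitute for the causality-matrix approach in the paper.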

cckao commented 2 months ago

Hi,

Understood. Thanks a lot for your advice.