PyTorch implementation of the paper "Intensity-Free Learning of Temporal Point Processes", Oleksandr Shchur, Marin Biloš and Stephan Günnemann, ICLR 2020.
The `master` branch contains a refactored version of the code. Some of the original functionality is missing, but the code is much cleaner and should be easier to extend. You can find the original code (used for the experiments in the paper) on the `original-code` branch.
In order to run the code, you need to install the `dpp` library that contains all the algorithms described in the paper:

```bash
cd code
python setup.py install
```
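If the installation succeeded, the package should be importable from Python. A minimal sanity check (it only assumes the package name `dpp` from the install step above):

```python
# Quick sanity check that the dpp package was installed correctly
import dpp

print(dpp.__file__)  # prints the install location of the package
```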
A Jupyter notebook `code/interactive.ipynb` contains the code for training models on the datasets used in the paper. The same code can also be run as a Python script `code/train.py`.
You can save your custom dataset in the format used in our code as follows:

```python
import torch

dataset = {
    "sequences": [
        {"arrival_times": [0.2, 4.5, 9.1], "marks": [1, 0, 4], "t_start": 0.0, "t_end": 10.0},
        {"arrival_times": [2.3, 3.3, 5.5, 8.15], "marks": [4, 3, 2, 2], "t_start": 0.0, "t_end": 10.0},
    ],
    "num_marks": 5,
}
torch.save(dataset, "data/my_dataset.pkl")
```
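The saved file can be loaded back with `torch.load`. The following sketch only assumes the dictionary structure and file path shown above:

```python
import torch

# Load the dataset dictionary saved above
dataset = torch.load("data/my_dataset.pkl")

print(dataset["num_marks"])  # total number of mark types, e.g. 5
for seq in dataset["sequences"]:
    # Each sequence stores event arrival times, their marks, and the observation window
    print(seq["arrival_times"], seq["marks"], seq["t_start"], seq["t_end"])
```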
`RecurrentTPP` is the base class for marked TPP models. You just need to inherit from it and implement the `get_inter_time_dist` method that defines how to obtain the distribution (an instance of `torch.distributions.Distribution`) over the inter-event times given the context vector. For example, have a look at the `LogNormMix` model from our paper.

You can also change the `get_features` and `get_context` methods of `RecurrentTPP` to, for example, use a transformer instead of an RNN.
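As an illustration, here is a minimal sketch of such a subclass where the inter-event times follow an exponential distribution. The import path, the constructor signature, and the `context_size` attribute are assumptions about the refactored code and may need to be adapted:

```python
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Exponential

from dpp.models.recurrent_tpp import RecurrentTPP  # hypothetical import path


class ExponentialTPP(RecurrentTPP):
    """Toy model: inter-event times follow an Exponential distribution."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Assumes RecurrentTPP sets self.context_size (dimension of the context vector)
        self.linear = nn.Linear(self.context_size, 1)

    def get_inter_time_dist(self, context):
        # context has shape (batch_size, seq_len, context_size);
        # map it to a positive rate parameter of the exponential distribution
        rate = F.softplus(self.linear(context)).squeeze(-1)
        return Exponential(rate)
```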
Requirements:

```
numpy=1.16.4
pytorch=1.2.0
scikit-learn=0.21.2
scipy=1.3.1
```
Please cite our paper if you use the code or datasets in your own work:

```bibtex
@article{shchur2020intensity,
    title={Intensity-Free Learning of Temporal Point Processes},
    author={Oleksandr Shchur and Marin Bilo\v{s} and Stephan G\"{u}nnemann},
    journal={International Conference on Learning Representations (ICLR)},
    year={2020},
}
```