shchur / ifl-tpp

Implementation of "Intensity-Free Learning of Temporal Point Processes" (Spotlight @ ICLR 2020)
https://openreview.net/forum?id=HygOjhEYDH
MIT License

Loss with NLL of mark and MAE of inter-event time #14

Closed · maSteinbach closed this 2 years ago

maSteinbach commented 3 years ago

Hi Oleksandr,

I want to model a marked TPP with LogNormMix. For that I also want to use the MAE as a loss for the inter-event times, but I'm unsure how to combine it with the NLL loss of the predicted marks. I tried the following, and it worked quite well:

Loss = |E_{p(tau)}[tau] - tau| + NLL(mark) = MAE(tau) + NLL(mark)

But is this a reasonable loss function?

Thank you for your help!

shchur commented 3 years ago

Yes, this is fine if you only care about MAE/MSE + accuracy. You could simply use the context tensor to obtain the (point estimate of the) next inter-event time and train using the objective function that you specified. The code will look something like:

import torch
import torch.nn.functional as F

features = self.get_features(batch)   # per-event input features
context = self.get_context(features)  # embedding of the event history
# Point estimate of the next inter-event time; softplus keeps it positive
m = F.softplus(self.linear_time(context))
mae = torch.abs(m - batch.inter_times).sum(-1)  # sum of absolute errors per sequence

Then you could train this model using the objective function that you wrote above.
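To complete the objective from the question, the NLL(mark) term can come from a classification head on the same context tensor. A rough sketch, continuing the snippet above (the linear_mark head and the batch.marks field are my assumptions for illustration, not names from this repo):

import torch.nn.functional as F

# Hypothetical classification head mapping the context to K mark logits
mark_logits = self.linear_mark(context)  # shape (batch, seq_len, K)
mark_nll = F.cross_entropy(
    mark_logits.transpose(1, 2),  # cross_entropy expects shape (N, K, L)
    batch.marks,                  # assumed integer-encoded mark labels
    reduction="none",
).sum(-1)                         # per-sequence NLL of the marks
loss = (mae + mark_nll).mean()    # MAE(tau) + NLL(mark), as in the objective above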

You can interpret the MAE term as coming from a TPP model where the conditional distribution over the inter-event times has density p(tau) ∝ exp(-|tau - m|). Note that this is not the Laplace distribution with mean m: the Laplace distribution is supported on (-∞, ∞), while this inter-event time distribution is supported on [0, ∞); what we have here is rather a truncated Laplace distribution. I don't know its normalized density offhand, and it may not be straightforward to sample from it. However, if you only care about MAE/MSE + accuracy, this shouldn't be a problem.

If, however, you compute the MAE as mae = (torch.abs(m - batch.inter_times) / batch.inter_times).sum(-1), then I'm quite sure that this doesn't have an interpretation as a conditional density. You can still train your model with this loss just like before, though, if you don't care about other capabilities like sampling.

maSteinbach commented 3 years ago

Thank you for your reply! It helped me a lot.

Why exactly p(tau) ∝ exp(-|tau - m|)? Can I also interpret this as a TPP with p(tau) ∝ exp(-(tau - m)^2), so to speak a "Gaussian" distribution on [0, ∞)?

shchur commented 3 years ago

Yes, your statement about p(tau) ∝ exp(-(tau - m)^2) is correct. Btw, here is the Wikipedia page about such truncated distributions: https://en.wikipedia.org/wiki/Truncated_distribution.
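For illustration, here is a minimal sketch of such a truncated Gaussian on [0, ∞) using scipy.stats.truncnorm (the values of m and s are arbitrary placeholders):

import numpy as np
from scipy.stats import truncnorm

m, s = 1.5, 1.0               # location and scale of the underlying Gaussian
a, b = (0.0 - m) / s, np.inf  # truncation bounds, standardized to the unit Gaussian
dist = truncnorm(a, b, loc=m, scale=s)
print(dist.pdf(0.5))          # normalized density p(tau) ∝ exp(-(tau - m)^2 / (2 s^2)) on [0, ∞)
print(dist.rvs(size=5))       # sampling from the truncated distribution is well-defined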

I will also provide some context here. A TPP is a generative model for variable-length continuous-time event sequences. We usually train such models by maximizing the log-likelihood of the training sequences. You can interpret some losses, such as MAE or MSE, as the negative log-likelihood of some TPP model where the conditional distribution p*(tau) over the inter-event times has a special form. However, this doesn't mean that only losses with this interpretation are "valid". In fact, if you only care about a point estimate of the next inter-event time (as measured by MAE/MSE) or about the accuracy of mark prediction, you don't even need a TPP model: you can directly optimize the loss that you care about. You should only worry about the "probabilistic" interpretation if you want to draw samples from the trained generative model.
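To make the correspondence concrete (a standard identity, added here for completeness): if p*(tau) is a Gaussian with mean m and fixed scale sigma, then

-log p*(tau) = (tau - m)^2 / (2 sigma^2) + log(sigma sqrt(2 pi)),

so minimizing the NLL with respect to m is exactly minimizing the squared error; a fixed-scale Laplace distribution recovers the absolute error in the same way.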

Put differently, I don't quite understand why MAE/MSE are used to evaluate TPPs. TPPs proposed in the literature usually define the entire distribution over the inter-event times, but MAE/MSE only care about a point estimate. I can totally imagine cases where MAE/MSE are useful metrics, but I don't think we should use TPPs in such scenarios: a simpler model that only produces a point estimate instead of the entire distribution will probably do much better.

maSteinbach commented 3 years ago

Thank you for the link to the Wikipedia page.

Yes, I agree with you that if one just needs a point estimate, a TPP is not necessary. For my application I tried both approaches for computing the point estimate, with and without a TPP. Computing the point estimate directly from the context vector (without a TPP) worked better.

giuseppecartella commented 4 months ago

Dear @shchur, thanks for your detailed explanation!

I am new to the topic. I am working with marked TPPs and I am interested in modelling the probabilistic nature of my problem, but at the same time I would like to have an L1 loss between the predictions and the ground truths. My question is the following: is it possible to sample during training with the reparametrization trick in order to generate the predictions, and then apply the loss between prediction and ground truth?

Does it really make sense?

Should I follow the chapter entitled "Sampling" in your paper?

Thank you very much. Giuseppe

shchur commented 4 months ago

Hi @giuseppecartella, you can model the inter-event time using the Laplace distribution. In this case the log-likelihood training objective will be equivalent to the L1 loss between the true and predicted inter-event times, and you will be able to use the reparametrization trick during sampling via the inverse CDF of the Laplace distribution.
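A minimal sketch of this idea with torch.distributions.Laplace, reusing a context tensor as earlier in this thread (the linear_loc head and the fixed unit scale are assumptions for illustration):

import torch

loc = self.linear_loc(context)  # hypothetical head predicting the location
scale = torch.ones_like(loc)    # fixed scale; could also be predicted
dist = torch.distributions.Laplace(loc, scale)
# With a fixed scale, minimizing the NLL is, up to an additive constant,
# the same as minimizing the L1 loss |tau - loc|
nll = -dist.log_prob(batch.inter_times).sum(-1)
# Reparametrized sample via the inverse-CDF transform, differentiable w.r.t. loc/scale
tau_sample = dist.rsample()
# Caveat: the Laplace distribution is supported on the whole real line, so samples
# can be negative; inter-event times may need clamping or a transformed distribution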

In theory you could sample predicted inter-event times during training and somehow compare them to the true values, but this loss function will not work as well as the log-likelihood, so based on my intuition I wouldn't recommend it.