musically-ut / tf_rmtpp

Recurrent Marked Temporal Point Processes
MIT License

Hello, thank you for your implementation. I don't know if it's abnormal or not. #11

Open waystogetthere opened 3 years ago

waystogetthere commented 3 years ago

Hello, thank you for your implementation. It is well written and easy to go through.

I downloaded the code and ran the synthetic Hawkes data set under the default settings, and I found that the predicted inter-event time stayed the same across events.

[Figure_1: mean predicted vs. mean ground-truth inter-event time]

This plot shows the mean predicted inter-event time and the mean ground-truth inter-event time over the 64 sequences in the Hawkes data set.

Also, the training loss does not change significantly during training:

Starting epoch... 0
2021-05-14 11:16:23.015432: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
Loss during batch 0 last BPTT = -4.655, lr = 0.00999
Loss on last epoch = -4.6547, new lr = 0.00999, global_step = 70
Starting epoch... 1
Loss during batch 0 last BPTT = -6.581, lr = 0.00999
Loss on last epoch = -6.5805, new lr = 0.00999, global_step = 140
Starting epoch... 2
Loss during batch 0 last BPTT = -6.584, lr = 0.00998
Loss on last epoch = -6.5837, new lr = 0.00998, global_step = 210
Starting epoch... 3
Loss during batch 0 last BPTT = -6.585, lr = 0.00997
Loss on last epoch = -6.5846, new lr = 0.00997, global_step = 280
Starting epoch... 4
Loss during batch 0 last BPTT = -6.585, lr = 0.00997
Loss on last epoch = -6.5855, new lr = 0.00997, global_step = 350
Starting epoch... 5
Loss during batch 0 last BPTT = -6.586, lr = 0.00996
Loss on last epoch = -6.5863, new lr = 0.00996, global_step = 420
Starting epoch... 6
Loss during batch 0 last BPTT = -6.587, lr = 0.00995
Loss on last epoch = -6.5871, new lr = 0.00995, global_step = 490
Starting epoch... 7
Loss during batch 0 last BPTT = -6.588, lr = 0.00994
Loss on last epoch = -6.5879, new lr = 0.00994, global_step = 560
Starting epoch... 8
Loss during batch 0 last BPTT = -6.589, lr = 0.00994
Loss on last epoch = -6.5887, new lr = 0.00994, global_step = 630
Starting epoch... 9
Loss during batch 0 last BPTT = -6.590, lr = 0.00993
Loss on last epoch = -6.5895, new lr = 0.00993, global_step = 700
Starting epoch... 10
Loss during batch 0 last BPTT = -6.590, lr = 0.00992
Loss on last epoch = -6.5903, new lr = 0.00992, global_step = 770

I don't know if it's abnormal or not. Any reply will be appreciated. Thank you very much!
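For what it's worth, the flatness is easy to confirm numerically. Here is a minimal check, using the `test_time_preds` array returned by the model and the `data` dict from the training script:

```python
import numpy as np

# Predicted inter-event times (durations), one row per test sequence.
pred_dur = test_time_preds - data['test_time_in_seq'][:, :test_time_preds.shape[1]]

# If the model has collapsed to a constant prediction, the within-sequence
# spread (max minus min) of predicted durations will be near zero.
print('max within-sequence spread:', np.ptp(pred_dur, axis=1).max())
```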

musically-ut commented 3 years ago

That is ... neither here nor there. I cannot say off the top of my head whether that is a good prediction or not; it seems to be the mean of the durations.
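One quick sanity check, assuming the `test_time_preds` and `data` arrays from your script: compare the flat prediction against the empirical mean inter-event time.

```python
import numpy as np

# If the model has collapsed to predicting a constant inter-event time,
# that constant will typically sit close to the empirical mean duration.
gt_dur = data['test_time_out_seq'] - data['test_time_in_seq']
pred_dur = test_time_preds - data['test_time_in_seq'][:, :test_time_preds.shape[1]]

print('mean ground-truth duration:', np.mean(gt_dur))
print('mean predicted duration:   ', np.mean(pred_dur))
```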

Does the same behavior persist with the other examples as well?

waystogetthere commented 3 years ago

Hello! Sorry for being away for a while; this week I have been a bit overwhelmed with deadlines. RMTPP is great work that keeps attracting discussion. I am quite inspired; thank you for your elegant work!


Yes, the green line is the mean predicted duration at each event index, averaged over all test cases:

pred_dur = test_time_preds - data['test_time_in_seq'][:, :test_time_preds.shape[1]]

And the red line is the mean ground-truth duration at each event index, averaged over all test cases:

gt_dur = data['test_time_out_seq'] - data['test_time_in_seq']

The graph is plotted as follows:

plt.plot(np.mean(pred_dur, axis=0), label='Predicted Duration', color='green')
plt.plot(np.mean(gt_dur, axis=0), label='Ground Truth Duration', color='red')

However, I do not think this is a good prediction, since the model predicts the same duration at every event index. The plot above is an average over sequences, so I also extracted the predictions for some specific test cases:

[Figure_1: predicted vs. ground-truth inter-event times for individual test sequences]
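In case it is useful, this is roughly how I pulled out a single sequence (sequence 0 is just an arbitrary example; `pred_dur` and `gt_dur` are computed as above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Per-sequence view of the predictions, rather than the mean over sequences.
i = 0
n = pred_dur.shape[1]
plt.plot(np.arange(n), pred_dur[i], label='Predicted Duration', color='green')
plt.plot(np.arange(n), gt_dur[i, :n], label='Ground Truth Duration', color='red')
plt.xlabel('event index')
plt.ylabel('inter-event time')
plt.legend()
plt.show()
```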

I found that in your code MAE is the metric chosen to evaluate the model's time-prediction performance. I would be more interested in the inter-event time vs. event index plot, as the paper shows in Figure 4: https://www.kdd.org/kdd2016/papers/files/rpp1081-duA.pdf
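For concreteness, a minimal sketch of such an MAE computation; the positivity mask is my own assumption about how padded events are marked, so your code's masking may differ:

```python
import numpy as np

# Mean absolute error between predicted and true event times.
# Assumption: padded positions in test_time_out_seq are zeros, so a
# simple positivity mask drops them; adjust if the pipeline differs.
true_times = data['test_time_out_seq'][:, :test_time_preds.shape[1]]
mask = true_times > 0
print('time MAE:', np.abs(test_time_preds - true_times)[mask].mean())
```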

Thank you, and I wish you a good day!