mbilos / neural-flows-experiments

Experiments for Neural Flows paper
https://arxiv.org/abs/2110.13040

Question about LogNormal distribution for TPP #4

Closed · won-bae closed this 7 months ago

won-bae commented 2 years ago

Hi authors, as a follow-up to #2, I am confused about the log transformation. Here, you (and the intensity-free paper) assume that the input times follow a MixLogNormal distribution. By definition, np.log(times) then follows a MixNormal distribution. Given that, I am not sure why each mixture component of times = torch.log(times + 1) follows a LogNormal instead of a Normal in https://github.com/mbilos/neural-flows-experiments/blob/bd19f7c92461e83521e268c1a235ef845a3dd963/nfe/experiments/tpp/model.py#L153, especially since the intensity-free repo uses a MixNormal for the log-transformed (and normalized) input in https://github.com/shchur/ifl-tpp/blob/e7ebab1ceab56cee440bd8e99b5c1bd42d6ada07/code/dpp/models/log_norm_mix.py#L40. Could you elaborate on this?
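
To make the distinction concrete, here is a minimal sketch (not the repo's code, just a single-component illustration with made-up parameters) of the change-of-variables identity I have in mind: evaluating a LogNormal on the raw times is equivalent to evaluating a Normal on the log-transformed times plus a Jacobian term, whereas evaluating a LogNormal on the already log-transformed times is a different density.

```python
import torch
from torch.distributions import Normal, LogNormal

# Hypothetical parameters of a single mixture component, for illustration only.
loc, scale = torch.tensor(0.5), torch.tensor(0.8)

times = torch.tensor([0.3, 1.2, 4.0])   # raw inter-event times, > 0
log_times = torch.log(times + 1)        # the transform applied in model.py

# If (times + 1) ~ LogNormal(loc, scale), then log(times + 1) ~ Normal(loc, scale).
# Change of variables: log p(times) = Normal.log_prob(log(times + 1)) - log(times + 1).
ll_raw = LogNormal(loc, scale).log_prob(times + 1)
ll_via_log = Normal(loc, scale).log_prob(log_times) - log_times

print(torch.allclose(ll_raw, ll_via_log))  # True

# The part I find confusing: applying a LogNormal to the already
# log-transformed input, which evaluates a different density.
ll_log_on_log = LogNormal(loc, scale).log_prob(log_times)
print(torch.allclose(ll_raw, ll_log_on_log))  # False
```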

won-bae commented 2 years ago

@mbilos Could you clarify this?

mbilos commented 2 years ago

This looks like a bug; it seems one should use a Normal distribution here, as you pointed out.
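
For reference, a minimal sketch of the intended setup (not the exact code in model.py; parameter names and shapes here are hypothetical): a mixture of Normals over the log-transformed inter-event times, with the Jacobian of the transform accounted for when computing the log-likelihood of the raw times.

```python
import torch
from torch.distributions import Normal, Categorical, MixtureSameFamily

# Hypothetical decoder outputs: batch of 8 events, 3 mixture components.
logits = torch.randn(8, 3)      # mixture weights (pre-softmax)
locs = torch.randn(8, 3)        # component means
log_scales = torch.randn(8, 3)  # component log standard deviations

# Mixture of Normals over log(times + 1), as in the intensity-free setup.
mix = MixtureSameFamily(
    Categorical(logits=logits),
    Normal(locs, log_scales.exp()),
)

times = torch.rand(8) * 5            # raw inter-event times
log_times = torch.log(times + 1)

# Log-likelihood of the raw times: mixture log-density of log(times + 1)
# minus log(times + 1), since d/dt log(t + 1) = 1 / (t + 1).
log_prob = mix.log_prob(log_times) - torch.log(times + 1)
```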