Closed · won-bae closed this 7 months ago
Hi authors, as a follow-up to #2, I am confused about the log transformation. Here, you (and the intensity-free paper) assume that the input `times` follows a `MixLogNormal` distribution. Then, by definition, `np.log(times)` follows `MixNormal`. Given that, I am not sure why each mixture component of `times = torch.log(times + 1)` follows a `LogNormal` instead of a `Normal` in https://github.com/mbilos/neural-flows-experiments/blob/bd19f7c92461e83521e268c1a235ef845a3dd963/nfe/experiments/tpp/model.py#L153, especially since the intensity-free repo uses `MixNormal` for the log-transformed (and normalized) input in https://github.com/shchur/ifl-tpp/blob/e7ebab1ceab56cee440bd8e99b5c1bd42d6ada07/code/dpp/models/log_norm_mix.py#L40. Could you elaborate?
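A quick numerical sanity check of the reasoning above (illustrative only, with toy parameters, not the repo's code): if a variable is log-normal, its log is normal with the same `mu` and `sigma`, and the same holds per component of a mixture.

```python
import numpy as np

# Illustrative parameters for a single component.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.8

# Sample inter-event times from LogNormal(mu, sigma).
times = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

# By definition, log(times) should follow Normal(mu, sigma).
log_times = np.log(times)
print(log_times.mean(), log_times.std())  # close to mu=0.5, sigma=0.8
```

So once the model works in log-space, each mixture component should be parameterized as a normal there, not a log-normal.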
@mbilos Could you clarify this?
This looks like a bug: one should use a normal distribution here, as you pointed out, since the input has already been log-transformed.
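For reference, a minimal NumPy sketch of the intended computation (function and parameter names are illustrative, not the repo's code): with `x = log(t + 1)`, each mixture component is a normal density on `x`, and the likelihood of the raw inter-event time `t` picks up the change-of-variables Jacobian `dx/dt = 1 / (t + 1)`.

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    # Log-density of Normal(mu, sigma) at x.
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

def mixture_log_likelihood(t, weights, mus, sigmas):
    # Log-likelihood of a raw inter-event time t under a normal mixture
    # fitted in log-space, x = log(t + 1).
    x = np.log(t + 1.0)
    component_terms = normal_logpdf(x, mus, sigmas) + np.log(weights)
    # log-sum-exp over components, plus the Jacobian of x = log(t + 1).
    return np.logaddexp.reduce(component_terms) - np.log(t + 1.0)

# Toy 2-component mixture parameters.
weights = np.array([0.3, 0.7])
mus = np.array([0.0, 1.0])
sigmas = np.array([0.5, 0.8])
print(mixture_log_likelihood(2.0, weights, mus, sigmas))
```

Using `LogNormal` components on `x` instead would effectively model `log(log(t + 1))` as normal, which is not what either paper intends.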