Open legan78 opened 3 years ago
Hello,
The loss printed out is the negative log-likelihood, so it is completely normal for it to become negative. Note that in the paper we report the log-likelihood, so you need to invert the sign.
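To see why a negative NLL is normal, here is a minimal numeric sketch (not the RKN loss itself): a sharply peaked Gaussian has density greater than 1 near its mean, so the log-likelihood there is positive and the negative log-likelihood is below zero.

```python
import math

def gaussian_nll(x, mu, sigma):
    # Negative log-likelihood of a univariate Gaussian N(mu, sigma^2).
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)

# With a small sigma the density at the mean exceeds 1,
# so the NLL at the mean is negative.
print(gaussian_nll(0.0, 0.0, 0.1))  # ≈ -1.384
```

The same effect appears in training logs whenever the predicted variances become small while the predictions stay accurate.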
Despite that, the warning indicates that something is wrong and not all variables are being updated properly. If I remember correctly, I had similar issues when transitioning from TensorFlow 1.x to 2.x. Which version are you using? The code is tested with TensorFlow 1.13.
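For context, a minimal TF2 sketch of the failure mode behind this kind of warning (the variable names are illustrative, not the actual RKN code): a variable that never enters the loss computation gets a `None` gradient, and Keras reports it as "Gradients do not exist for variables […]".

```python
import tensorflow as tf

# Two variables, but only `w` participates in the loss.
w = tf.Variable(3.0)
orphan = tf.Variable(5.0)  # analogous to an unused transition-cell variable

with tf.GradientTape() as tape:
    loss = w * w  # `orphan` never enters the computation

grads = tape.gradient(loss, [w, orphan])
print(grads[0].numpy(), grads[1])  # 6.0 None
```

If the listed `tm_*_basis` and `log_transition_covar` variables are in this situation, they are created but disconnected from the graph that produces the loss, so the optimizer never updates them.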
Best wishes, Philipp
Thanks a lot for the clarification, @pbecker93. In that case, the Gaussian NLL can indeed become negative when the density is highly peaked, i.e., when the variances are very small.
My framework is TF2, but I have used tf.compat.v1 to be able to run the RKN. Did you encounter any particular difficulties when transitioning from 1.x to 2.x?
Best, ANMG
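For reference, a minimal sketch of the compatibility-mode setup mentioned above, assuming a plain TF2 installation (this is a generic example, not the RKN code): `tf.compat.v1.disable_v2_behavior()` restores TF1 semantics (graphs and sessions, no eager execution) and must be called before any graph is built.

```python
import tensorflow as tf

# Run TF1-style graph/session code under a TF2 installation.
tf.compat.v1.disable_v2_behavior()

g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None])
    y = 2.0 * x
    with tf.compat.v1.Session() as sess:
        out = sess.run(y, feed_dict={x: [1.0, 2.0]})
print(out)  # [2. 4.]
```

Note that running in compatibility mode does not by itself fix missing-gradient warnings; variables that are disconnected from the loss stay disconnected in either mode.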
Hi,
First of all, thanks a lot for your great contribution! I have recently attempted to reproduce the results for the pendulum state estimation. However, the training and validation losses become negative after a few epochs:
Epoch 18/500 40/40 - 25s - loss: -2.5637e+00 - rmse: 0.2772 - val_loss: -2.5312e+00 - val_rmse: 0.2866
I don't know whether it is related, but I also get the following TensorFlow warning:
WARNING:tensorflow:Gradients do not exist for variables ['pendulum_state_estem_rkn/rnn/rkn_transition_cell/tm_11_basis:0', 'pendulum_state_estem_rkn/rnn/rkn_transition_cell/tm_12_basis:0', 'pendulum_state_estem_rkn/rnn/rkn_transition_cell/tm_21_basis:0', 'pendulum_state_estem_rkn/rnn/rkn_transition_cell/tm_22_basis:0', 'pendulum_state_estem_rkn/rnn/rkn_transition_cell/log_transition_covar:0'] when minimizing the loss.
I have noticed that these variables are not assigned to any attribute of the model (self). Do you have any idea what could be happening?
Thanks in advance, ANMG