GaloisInc / dlkoopman

A general-purpose Python package for Koopman theory using deep learning.
https://pypi.org/project/dlkoopman/
MIT License

Ill-defined loss function depending on eigenvector phase #1

Closed souryadey closed 2 years ago

souryadey commented 2 years ago

Running `StatePred.train_net()` produces `RuntimeError: linalg_eig_backward: The eigenvectors in the complex case are specified up to multiplication by e^{i phi}. The specified loss function depends on this quantity, so it is ill-defined.`
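For context on what the error means, here is a minimal numpy illustration (hypothetical, not dlkoopman code) of the phase ambiguity: if `v` is a complex eigenvector of `K`, then `e^{i phi} * v` is an equally valid eigenvector, so any loss that depends on the phase of the eigenvectors has no well-defined gradient.

```python
import numpy as np

# Rotation matrix with complex eigenvalues +/- i.
K = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, eigvecs = np.linalg.eig(K)
v = eigvecs[:, 0]

# Multiply the eigenvector by an arbitrary phase e^{i*0.7}:
# it is still an eigenvector for the same eigenvalue...
w = np.exp(1j * 0.7) * v
assert np.allclose(K @ w, eigvals[0] * w)

# ...but it is a different vector, so any quantity computed from the
# eigenvector entries (e.g. their real parts) depends on the arbitrary
# phase the solver happened to return.
print(np.real(v[0]), np.real(w[0]))
```

This is exactly why PyTorch refuses to backpropagate a phase-dependent loss through `linalg.eig`.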

souryadey commented 2 years ago

This error arises from numerical instabilities when backpropagating through the eigendecomposition of the learned Koopman matrix.


Possible cause:

The dominant eigenvalue has magnitude significantly greater than 1.

Capturing this:

The magnitude of the dominant eigenvalue is recorded in the log file after each epoch.
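To see why a dominant eigenvalue magnitude above 1 is a red flag, here is a hedged numpy sketch (the matrices and helper names are made up for illustration): linear predictions evolve roughly as `x_t = K^t x_0`, so state norms scale like `|lambda_max|^t` and blow up when the dominant eigenvalue exceeds 1 in magnitude.

```python
import numpy as np

# Two toy Koopman matrices (upper triangular, so eigenvalues = diagonal).
K_stable   = np.array([[0.9, 0.1], [0.0, 0.8]])  # spectral radius 0.9
K_unstable = np.array([[1.3, 0.1], [0.0, 0.8]])  # spectral radius 1.3

def dominant_eigval_magnitude(K):
    """Magnitude of the dominant eigenvalue (the spectral radius)."""
    return np.max(np.abs(np.linalg.eigvals(K)))

def predict_norms(K, x0, steps):
    """Norms of x_t = K^t x0 for t = 1..steps."""
    norms, x = [], x0
    for _ in range(steps):
        x = K @ x
        norms.append(np.linalg.norm(x))
    return norms

x0 = np.ones(2)
print(dominant_eigval_magnitude(K_stable))    # ~0.9: norms stay bounded
print(dominant_eigval_magnitude(K_unstable))  # ~1.3: norms grow like 1.3^t
```

Watching this magnitude per epoch, as the log file does, is a cheap way to catch the blow-up before the eigendecomposition gradient becomes unstable.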


Possible cause:

Training data is not normalized.

Solution:

Set `normalize_Xdata = True` in `config.py`. This is the default.
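As a rough sketch of the kind of normalization this setting enables (an assumption for illustration; the exact scheme dlkoopman uses may differ), scaling the state data by its maximum absolute value keeps all values in `[-1, 1]`, which helps keep the learned Koopman matrix numerically well-behaved:

```python
import numpy as np

def normalize_states(X):
    """Scale X (shape: samples x state_dim) by its max absolute value."""
    scale = np.max(np.abs(X))
    return X / scale, scale

# Unnormalized data spanning a few hundred units...
X = np.array([[100.0, -250.0],
              [ 50.0,  400.0]])
X_norm, scale = normalize_states(X)
print(scale)                   # 400.0
print(np.max(np.abs(X_norm)))  # 1.0
```

The saved `scale` lets predictions be mapped back to the original units afterwards.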


Possible cause:

`StatePred.rank` is too high.

Solution:

Reduce `rank` so that it is less than `encoded_size`. The purpose of `rank` is to reduce the effective number of encoded states to a value lower than `encoded_size`. (Note that numerical issues can also occur if `rank` is higher than any of the layer sizes in `encoder_hidden_layers`.)
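The idea behind choosing `rank < encoded_size` can be sketched with a truncated SVD (a hypothetical illustration of rank reduction in general, not the package's internal code): keeping only the `rank` dominant singular directions of the encoded states genuinely reduces dimensionality only when `rank` is strictly smaller than `encoded_size`.

```python
import numpy as np

encoded_size = 8
rank = 4  # choose rank < encoded_size

# Stand-in for a matrix of encoded states (samples x encoded_size).
rng = np.random.default_rng(0)
Y = rng.standard_normal((100, encoded_size))

# Rank-r truncated SVD: keep only the r largest singular values/vectors.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
Y_r = (U[:, :rank] * S[:rank]) @ Vt[:rank]

print(np.linalg.matrix_rank(Y_r))  # 4
```

If `rank` were set equal to or above `encoded_size`, the truncation would keep everything and provide no reduction at all.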