Resolves #211
I've looked into this issue and was able to replicate the random behavior when calculating the eigenvectors: some eigenvectors may come out sign-flipped if you execute the code at different times.
None of the scipy functions we use to calculate the eigenvectors in LFDA supports something like a `random_state`: `scipy.linalg.eigh`, `scipy.linalg.eig` and `scipy.sparse.linalg.eigsh`.
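For context, the sign ambiguity is inherent to the eigenproblem rather than to any particular solver: if `v` is an eigenvector, so is `-v`. A minimal sketch with a toy symmetric matrix (purely illustrative, not code from this PR):

```python
import numpy as np
from scipy.linalg import eigh

# Build a small symmetric matrix so eigh applies.
rng = np.random.RandomState(0)
A = rng.rand(5, 5)
A = A + A.T

w, v = eigh(A)

# Both v[:, i] and -v[:, i] satisfy A x = w[i] x, so the sign
# returned by the solver is arbitrary.
i = 0
print(np.allclose(A @ v[:, i], w[i] * v[:, i]))      # True
print(np.allclose(A @ (-v[:, i]), w[i] * (-v[:, i])))  # True
```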
There are two alternatives:

1. Implement some kind of "convention" to force deterministic behavior. For instance: "the first value of every eigenvector must be positive; otherwise, flip the sign of the entire vector." (Flipping the sign of an eigenvector doesn't affect the eigenvector property; see the sketch after this list.)
2. Add a note in the documentation, just like sklearn does for SVD.
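A minimal sketch of what option 1 could look like, assuming the eigenvectors are stored as columns (the scipy convention); `fix_eigenvector_signs` is a hypothetical name, not an existing function in the codebase:

```python
import numpy as np

def fix_eigenvector_signs(vectors):
    """Hypothetical helper: enforce the convention from option 1 by
    flipping any eigenvector (column) whose first entry is negative."""
    signs = np.sign(vectors[0, :])
    signs[signs == 0] = 1.0  # leave columns whose first entry is zero untouched
    return vectors * signs  # broadcasts one sign per column

# Usage: v and -v now map to the same canonical representative.
rng = np.random.RandomState(0)
A = rng.rand(4, 4)
A = A + A.T
w, v = np.linalg.eigh(A)
print(np.allclose(fix_eigenvector_signs(v), fix_eigenvector_signs(-v)))  # True
```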
Option 1 decreases performance a bit, and option 2 seems more reasonable. Consider this message a draft.