Closed: sunshineinsandiego closed this issue 4 years ago.
This is likely due to rounding errors. LMNN learns the `L` matrix directly, so `get_mahalanobis_matrix` is computing `M = L.T @ L`. Your example then decomposes `M` to recover `L`, which I would bet is the source of the floating point precision loss.
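To illustrate, here is a minimal plain-NumPy sketch (not metric-learn code, using a random stand-in for the learned `L`) of the round trip described above: a factor recovered from `M` reproduces `M` only up to floating point precision, and it need not coincide with the original `L` row for row.

```python
import numpy as np
from scipy.linalg import cholesky

rng = np.random.RandomState(0)
L = rng.rand(4, 4)        # stand-in for the L that LMNN learns directly

M = L.T @ L               # what get_mahalanobis_matrix() returns
L_rec = cholesky(M)       # upper-triangular factor with L_rec.T @ L_rec == M

print(np.allclose(L_rec.T @ L_rec, M))   # True, up to machine precision
print(np.allclose(L_rec, L))             # generally False: the factor need not be the original L
```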
Thanks, is there any way to recover `L` directly, without going the roundabout way of pulling `M` from the LMNN module and then recomputing `L`?
Hi @sunshineinsandiego, yes, you can get the learned transformation `L` with the attribute `lmnn.components_`.
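For example, a minimal sketch (fitting on the iris data as a stand-in for the original setup):

```python
import numpy as np
from sklearn.datasets import load_iris
from metric_learn import LMNN

X, Y = load_iris(return_X_y=True)
lmnn = LMNN().fit(X, Y)

L = lmnn.components_          # the learned linear transformation, no decomposition needed
print(np.allclose(X.dot(L.T), lmnn.transform(X)))   # should print True
```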
Perfect! Thank you
Description
LMNN: Error recovering the linear transformation matrix (L.T) from the Mahalanobis matrix.

I am trying to recover the linear transformation matrix (L.T) from a saved Mahalanobis matrix produced by the LMNN algorithm, and there seem to be quite a few differences between the manually transformed data (using `X.dot(L.T)`) and `lmnn.transform(X)`. Is this a rounding or precision issue?
Steps/Code to Reproduce
Example:
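A sketch of the kind of reproduction described above (assuming iris data and a Cholesky factorization for the decomposition step; the exact data and decomposition used are assumptions, not shown in the issue):

```python
import numpy as np
from scipy.linalg import cholesky
from sklearn.datasets import load_iris
from metric_learn import LMNN

X, Y = load_iris(return_X_y=True)

lmnn = LMNN()
X_lmnn = lmnn.fit_transform(X, Y)

# Recover a linear transformation from the saved Mahalanobis matrix M = L.T @ L
M = lmnn.get_mahalanobis_matrix()
L = cholesky(M)                       # upper-triangular factor, L.T @ L == M (up to precision)

print(X[0:4, :].dot(L.T))             # manual transformation of the first 4 rows
print(X_lmnn[0:4, :])                 # LMNN's own transformation; the rows differ
```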
Expected Results
The manual transformation of X[0:4, :] should equal the first 4 rows of lmnn.fit_transform(X, Y) (equivalently, lmnn.transform(X)[0:4, :]).
Actual Results
The two results are not equal. Is this a rounding / precision error?
Versions
Darwin-17.7.0-x86_64-i386-64bit
Python 3.7.7 (default, Mar 10 2020, 15:43:27) [Clang 10.0.0 (clang-1000.11.45.5)]
NumPy 1.18.2
SciPy 1.4.1
Scikit-Learn 0.22.2.post1
Metric-Learn 0.5.0
Thank you!