Closed: dtch1997 closed this issue 1 year ago
I have a couple of questions about the learned contraction metrics in `train_cm.py`:

1. Must `M_loss` converge to 0?

   a. Corollary: if `M_loss > 0`, does it imply that there exists some `x, x*, u*` in the dataset on which the learned metric is not a valid contraction metric?

   b. Do you observe empirically that `M_loss` eventually goes to 0?

Thanks!

---

1. `M_loss` computes the matrix on the LHS of eq. 16 in the paper and makes sure that its maximum eigenvalue is less than -0.1. So if your maximum eigenvalue is -0.01, the metric should still be valid but will have a positive `M_loss` (empirically, though, when the maximum eigenvalue is close to 0 the contraction starts getting unacceptably slow in the direction of the corresponding eigenvector).

   a. Yes, with the same -0.1 caveat from above.

   b. For simple dynamics (like the ones we use), I would expect `M_loss` to go to zero. If it doesn't, I would check the empirical performance of the controller to see if it's "good enough" (the code in the `train_cm.py` script should log plots of controller performance to tensorboard throughout training). Sometimes you get situations where the metric isn't valid near the boundary of the training space but is valid on the interior.

I'm happy to answer these more research-y questions, but let's move them over to the (newly activated) discussion section of this repo and reserve issues for bugs/feature requests for the code itself.
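To make the margin behavior concrete, here is a minimal numpy sketch of the kind of hinge penalty described above. The function name, the symmetrization step, and the `margin` parameter are my own assumptions for illustration, not the repo's actual implementation; the point is only that the loss is zero iff the maximum eigenvalue is below `-margin`, so a matrix can satisfy the contraction condition (maximum eigenvalue below 0) while still incurring a positive loss.

```python
import numpy as np

def m_loss(M, margin=0.1):
    """Hinge penalty on the largest eigenvalue of the eq. 16 matrix.

    Returns (loss, max_eig). The loss is zero iff max_eig <= -margin;
    it is positive otherwise, even when max_eig < 0 and the metric is
    still technically a valid contraction metric.
    """
    M_sym = 0.5 * (M + M.T)                 # symmetrize for a real spectrum
    max_eig = np.linalg.eigvalsh(M_sym)[-1]  # eigvalsh sorts ascending
    return max(max_eig + margin, 0.0), max_eig

# Maximum eigenvalue -0.01: the contraction condition holds (eig < 0),
# but the -0.1 margin is violated, so M_loss is positive.
M = np.diag([-1.0, -0.5, -0.01])
loss, max_eig = m_loss(M)
assert max_eig < 0.0   # metric still valid...
assert loss > 0.0      # ...but M_loss has not reached zero
```

With the margin satisfied (e.g. `np.diag([-1.0, -0.5, -0.2])`), the same function returns a loss of exactly zero, which is the convergence behavior asked about in question 1b.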