StanfordASL / neural-network-lyapunov

Synthesizing neural-network Lyapunov functions (and controllers) as stability certificate.
MIT License

Learn a relu model to approximate the Pole dynamics #412

Closed hongkai-dai closed 2 years ago

hongkai-dai commented 2 years ago


lujieyang commented 2 years ago

neural_network_lyapunov/examples/pole/learn_relu_dynamics.py, line 37 at r1 (raw file):

                constant_control_steps=1)[0]

    x_next_is_nan = torch.any(torch.isnan(x_next_samples), dim=1)

Why would x_next be nan? When the mass matrix M loses rank?

lujieyang commented 2 years ago

neural_network_lyapunov/examples/pole/learn_relu_dynamics.py, line 37 at r1 (raw file):

Previously, hongkai-dai (Hongkai Dai) wrote…
I added documentation to explain why x_next can be NaN. This happens when the current (x_AB, y_AB) has a norm close to the pole length and the velocity is large (namely the azimuth angle is very small, so the pole is close to falling over at high speed); the next (x_AB, y_AB) can then have a norm larger than the pole length, which is not physically valid, and the simulation returns NaN.

Why wouldn't such a scenario be eliminated by the criterion that the azimuth angle must be larger than 45°?
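For context, the pattern under discussion is filtering NaN rows out of the sampled next states before training the ReLU dynamics model. A minimal sketch of that masking step (the tensor values here are made up; only the `torch.any(torch.isnan(...), dim=1)` line mirrors the PR):

```python
import torch

# Hypothetical batch of sampled next states; the second row contains a NaN,
# standing in for an invalid step where the norm of (x_AB, y_AB) exceeded
# the pole length.
x_next_samples = torch.tensor([
    [0.10, 0.20, 0.00, 0.00],
    [float("nan"), 0.30, 0.00, 0.00],
    [0.20, -0.10, 0.50, 0.40],
])

# Flag every row that contains at least one NaN entry (same idiom as
# learn_relu_dynamics.py, line 37).
x_next_is_nan = torch.any(torch.isnan(x_next_samples), dim=1)

# Keep only the physically valid samples for training.
x_next_valid = x_next_samples[~x_next_is_nan]
print(x_next_valid.shape[0])  # 2 valid rows remain
```

Boolean-mask indexing with `~x_next_is_nan` drops the invalid rows in one step, so the training set never sees NaN targets.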

lujieyang commented 2 years ago
:lgtm: