Closed xueminchi closed 3 months ago
Hi, @Tim-Salzmann

Thanks for the great package!

I have tried the demo of nerf_trajectory_optimization.py. It works successfully. Later, I trained an NN-based SDF model and designed a model predictive controller on top of the learned SDF using this package. Thanks to the package, it works without any issues, other than the computation speed being a bit lower than the analytical SDF.

I'm wondering if this package supports directly modeling constraints as equations over PyTorch and CasADi variables, rather than a learned model. For example, I create a CasADi variable and evaluate an equation coded in PyTorch that defines a constraint (not an NN model); the gradient of this equation can be obtained via PyTorch's autograd. Is this possible? Currently, when I do this via l4c.L4CasADi, the issue is that the evaluation is done by forward, while common equations coded in PyTorch have no forward method.

Best, Xuemin
This is fantastic to hear! For speed, I recommend taking a look at the preview of L4CasADi v2 [1].
The constraint should be trivially possible, unless I am misunderstanding something. You could take a look at how it is done here:
Let me know if this answers your question or if I misunderstood anything. The wrapped object does not have to be a model; it can be a plain function. E.g.,
def g(input):
    return (input[0] + input[1] - 10)[..., None]

g = l4c.L4CasADi(g, name='g', model_expects_batch_dim=False)(x)
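To illustrate why wrapping a plain function works, here is a minimal PyTorch-only sketch (no L4CasADi needed; the constant 10 and the indexing follow the snippet above) showing that autograd differentiates such a hand-written expression directly, with no forward method required:

```python
import torch

def g(x):
    # Plain PyTorch expression, not an nn.Module: x0 + x1 - 10,
    # with a trailing axis so the output is at least 1-D.
    return (x[0] + x[1] - 10.0)[..., None]

x = torch.tensor([3.0, 4.0], requires_grad=True)
y = g(x)                      # evaluates to -3
y.backward(torch.ones_like(y))
# x.grad is [1., 1.]: autograd handles the bare function just fine
```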
Best, Tim
Hi, @Tim-Salzmann
Thank you for the prompt response and the references. They are incredibly helpful and timely.
When designing a predictive controller, batch queries for the NN model are crucial for trajectory optimization since environments typically contain multiple obstacles or objects. Evaluating in batch, as opposed to in a for-loop, should be more efficient and leverage the advantages of PyTorch. I am eager to try the batch version demo.
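As a rough illustration of the batching point, here is a sketch in plain PyTorch (the sphere obstacles, `centers`, `radius`, and `sdf_batch` are all hypothetical stand-ins, not part of l4casadi) of evaluating an SDF for many query points against many obstacles in one vectorized call instead of a Python loop:

```python
import torch

# Hypothetical analytic sphere SDF: signed distance from each query point
# to the closest of several obstacle centers, for a whole batch at once.
centers = torch.tensor([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # (M, 2)
radius = 1.0

def sdf_batch(points):
    # points: (N, 2); pairwise distances (N, M) via torch.cdist,
    # then the closest obstacle per query point.
    d = torch.cdist(points, centers) - radius
    return d.min(dim=1).values

points = torch.tensor([[2.0, 0.0], [5.0, 1.0]])
dists = sdf_batch(points)  # one call covers all points and all obstacles
```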
The second reference to l4casadi/examples/simple_nlp.py is also very useful. I can now use torch.nn.Module and design various forward methods within the class.
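For completeness, a minimal sketch of the nn.Module route mentioned above (the class name `SumConstraint` is illustrative, not from l4casadi): any hand-written constraint can be given a forward method by wrapping it in a module.

```python
import torch

# Wrap a hand-written constraint in an nn.Module so it exposes forward().
class SumConstraint(torch.nn.Module):
    def forward(self, x):
        # x0 + x1 - 10, with a trailing axis so the output is at least 1-D
        return (x[..., 0] + x[..., 1] - 10.0)[..., None]

con = SumConstraint()
out = con(torch.tensor([3.0, 4.0]))  # evaluates to -3
```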
In short, these demos are extremely helpful. Your answer has resolved my issues. Thanks again!
Best, @xueminchi