Closed alhaddad-m closed 1 year ago
Hi Muhammad,
I am unsure if I understand the problem correctly: You are running out of GPU memory when using L4CasADi?
Please provide some more details so I better understand what is happening.
Best Tim
Thank you, Tim, for replying. It turned out there was an issue in our CNN model, which caused the memory problem, so it is not related to L4CasADi. Thank you again.
Muhammad
One thing to keep in mind is that L4CasADi traces three functions for a model to work: the forward pass, the Jacobian, and the Hessian. This means the memory requirements will be larger than for running forward inference alone on the PyTorch model.
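To illustrate why the derivative traces cost extra memory, here is a minimal PyTorch sketch (not L4CasADi's internal code; the tiny model and the `torch.func` calls are stand-ins) that builds the same three quantities, forward value, Jacobian, and Hessian, for a scalar-valued network. The first- and second-order functions carry much larger computation graphs than the plain forward pass, which is why peak memory grows:

```python
import torch
import torch.nn as nn
from torch.func import jacrev, hessian

# Hypothetical stand-in model; any nn.Module behaves the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))

def f(x):
    # Scalar-valued wrapper so the Hessian is well defined.
    return model(x).sum()

x = torch.randn(4)

y = f(x)             # forward pass only
jac = jacrev(f)(x)   # first-order sensitivities, shape (4,)
hess = hessian(f)(x) # second-order sensitivities, shape (4, 4)

print(y.shape, jac.shape, hess.shape)
```

For a CNN with ~140k inputs, the Hessian alone is a 140k-by-140k object, so even when the forward pass fits comfortably on the GPU, the traced derivative functions may not.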
Hi Tim! I am trying to use L4CasADi with a CNN model. The input size of the model is ~140k, and the model has more than a million parameters. I still run into CUDA memory problems, so I don't know whether L4CasADi can process this model for use with Acados. The model works fine without L4CasADi. Do you have any suggestions for fixing this problem? Perhaps there is a limit on the number of model parameters or inputs.
Best, Muhammad