Closed vonkin closed 1 year ago
Thanks for reaching out. I will try to give a short wrap-up:
The ml-casadi package makes the information of a PyTorch model available in a CasADi graph by either re-creating the same computation graph in CasADi (restricted to a minimal instruction set) or by using an approximation of the PyTorch model in the CasADi graph, which can be updated via external parameter injection.
This package, in contrast, uses the actual (traced) PyTorch model as an external CasADi function. As such, it can be used natively like any other CasADi function within the CasADi graph.
So far only CPU is supported, but enabling GPU support for the PyTorch models is mainly an engineering effort.
Regarding runtime: it is hard to make a general statement here. As always, it depends on the use case.
Feel free to have a look at the examples.
Let me know if you have any further questions.
I think I understand what you mean, but what is the purpose of doing this? Does this method have an advantage in terms of computational speed compared to re-creating the model in CasADi? I am currently working on a project using a NN as an MPC model, and I am very interested in progress in this area. Thank you for your answer.
Actually, I have another question. When I looked at your ml-casadi example code, I found that parameters are set after each solve. The parameter values are obtained by the line of code shown below.
params = learned_dyn_model.approx_params(np.stack(x_l, axis=0), flat=True)
My question is: why is x_l used as an argument to this function? Does it mean that the model is approximated at the points contained in x_l? x_l is the value of the previous solution. Is it reasonable to approximate at the previous solution when performing the next solve?
It mainly has the advantage of not being restricted to approximations. From a computational-speed perspective, it depends on the network size. For tiny networks, a naive (ml-casadi) implementation should be faster (it is restricted to a specific instruction set, basically MLPs). For larger networks, L4CasADi should be faster. I have not had the time or the need to compare computation times extensively across settings and use cases.
You are correct that, for the ml-casadi example, x_l is the approximation point for the model. Whether it is reasonable to use the last solution as the approximation point for the next iteration depends on multiple factors of your use case: Is the next solution expected to be very far away from the last one? Are you using a Real-Time Iteration framework? ... Depending on those, it might or might not be reasonable to approximate the network at the previous solution.
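To make the approximation idea concrete, here is a minimal numpy sketch of a first-order Taylor expansion of a toy network around a previous solution. The network, weights, and names (`f`, `x_prev`) are illustrative stand-ins, not the ml-casadi API:

```python
import numpy as np

# Hypothetical tiny "network": f(x) = tanh(W x + b), a stand-in for a
# learned dynamics model. W, b, x_prev are made-up illustrative values.
W = np.array([[0.5, -0.2], [0.1, 0.8]])
b = np.array([0.1, -0.3])

def f(x):
    return np.tanh(W @ x + b)

def jacobian(x):
    # Analytic Jacobian of tanh(W x + b): diag(1 - tanh^2) @ W
    s = 1.0 - np.tanh(W @ x + b) ** 2
    return s[:, None] * W

# Approximation point: the previous MPC solution (the role x_l plays above).
x_prev = np.array([0.2, -0.1])

def f_approx(x):
    # First-order Taylor expansion around x_prev. This fixed-parameter
    # approximation is what gets injected into the CasADi graph and
    # refreshed after every solve.
    return f(x_prev) + jacobian(x_prev) @ (x - x_prev)

# Near x_prev the approximation is accurate; far away it degrades.
x_near = x_prev + 0.01
x_far = x_prev + 1.0
err_near = np.abs(f(x_near) - f_approx(x_near)).max()  # small
err_far = np.abs(f(x_far) - f_approx(x_far)).max()     # noticeably larger
```

This is why the choice of approximation point matters: if the next solution stays close to `x_prev`, the linearization is a good surrogate for the network; if it moves far away, the approximation error grows.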
Thank you very much for your answer. The following is my own understanding; please check whether it is correct. At present, if I want to integrate a large NN model into MPC and meet real-time computation requirements, I can only choose the approximation approach, and I need to choose the approximation point properly. The selection of approximation points is crucial to whether the MPC can fully exploit the accuracy of the NN model.
In the end, I am very much looking forward to an NN-MPC framework that can take advantage of GPU computing performance and does not require approximations.
Thank you very much.
That depends on the exact definition of "large" and on your real-time requirements. I suggest you try L4CasADi to see how close it brings you to your requirements. As I said, I think GPU support would be relatively easy technically. If this is THE key feature you are missing, I can try to look into it.
Hi Tim, could you please tell me how to install this package? It is kind of confusing. I have cloned the package locally and run "python setup.py install". Then I checked the installed packages using pip list; a new package named "UNKNOWN" had been installed, and I can't import l4casadi in Python.
Please use pip install . and not python setup.py install, as stated in the README instructions. The installation procedure compiles the l4casadi C++ library during installation.
I have tried pip install . as the picture shows, but it comes out with the same result.
Which Python version are you using? Please make sure to use Python 3.9 or higher.
Best Tim
3.10.6
I downloaded this repo using git clone first and then ran pip install . Is this procedure right?
I suspect the initial call to python setup.py install created a weird build state. If I run python setup.py install, a subsequent pip install . fails for me too (with a different error message than the one you are getting).
Can you make sure to remove all artifacts from the python setup.py install invocation or, even better, clone the repo into a fresh folder?
If this still fails, please post the output of pip install . -v from the freshly cloned repo.
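For reference, cleaning up could look roughly like this; the paths are the typical setuptools/scikit-build artifacts, and whether they all exist in your checkout is an assumption:

```shell
# Run from the repository root. Removes build artifacts that a previous
# `python setup.py install` may have left behind (typical setuptools /
# scikit-build output paths; adjust for your checkout).
rm -rf build/ dist/ _skbuild/ ./*.egg-info
# Then reinstall cleanly (requires the cloned repo and network access):
# pip install .
```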
Best Tim
I deleted the original folder and cloned the repo into a new directory:
Using pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
Defaulting to user installation because normal site-packages is not writeable
Processing /home/vonkin/l4casadi
Running command pip subprocess to install build dependencies
Collecting setuptools>=42
Using cached setuptools-68.0.0-py3-none-any.whl (804 kB)
Collecting scikit-build>=0.13
Using cached scikit_build-0.17.6-py3-none-any.whl (84 kB)
Collecting cmake>=3.18
Using cached cmake-3.26.4-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (24.0 MB)
Collecting ninja
Using cached ninja-1.11.1-py2.py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (145 kB)
Collecting tomli
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting packaging
Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting distro
Using cached distro-1.8.0-py3-none-any.whl (20 kB)
Collecting wheel>=0.32.0
Using cached wheel-0.40.0-py3-none-any.whl (64 kB)
Installing collected packages: ninja, cmake, wheel, tomli, setuptools, packaging, distro, scikit-build
Successfully installed cmake-3.26.4 distro-1.8.0 ninja-1.11.1 packaging-23.1 scikit-build-0.17.6 setuptools-68.0.0 tomli-2.0.1 wheel-0.40.0
Installing build dependencies ... done
Running command Getting requirements to build wheel
running egg_info
creating UNKNOWN.egg-info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
Getting requirements to build wheel ... done
Running command Preparing metadata (pyproject.toml)
running dist_info
creating /tmp/pip-modern-metadata-1wvnjuxf/UNKNOWN.egg-info
writing manifest file '/tmp/pip-modern-metadata-1wvnjuxf/UNKNOWN.egg-info/SOURCES.txt'
writing manifest file '/tmp/pip-modern-metadata-1wvnjuxf/UNKNOWN.egg-info/SOURCES.txt'
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: UNKNOWN
Running command Building wheel for UNKNOWN (pyproject.toml)
--------------------------------------------------------------------------------
-- Trying 'Ninja' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
Not searching for unused variables given on the command line.
-- The C compiler identification is GNU 11.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- The CXX compiler identification is GNU 11.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done (0.3s)
-- Generating done (0.0s)
-- Build files have been written to: /home/vonkin/l4casadi/_cmake_test_compile/build
--
-------
------------
-----------------
----------------------
---------------------------
--------------------------------
-- Trying 'Ninja' generator - success
--------------------------------------------------------------------------------
Configuring Project
Working directory:
/home/vonkin/l4casadi/_skbuild/linux-x86_64-3.10/cmake-build
Command:
/tmp/pip-build-env-t08w0cay/overlay/local/lib/python3.10/dist-packages/cmake/data/bin/cmake /home/vonkin/l4casadi/libl4casadi -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-t08w0cay/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/home/vonkin/l4casadi/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.6 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-t08w0cay/overlay/local/lib/python3.10/dist-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.10.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython_NumPy_INCLUDE_DIRS:PATH=/usr/lib/python3/dist-packages/numpy/core/include -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython3_NumPy_INCLUDE_DIRS:PATH=/usr/lib/python3/dist-packages/numpy/core/include -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-t08w0cay/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja -DCMAKE_BUILD_TYPE:STRING=Release
Not searching for unused variables given on the command line.
-- The C compiler identification is GNU 11.3.0
-- The CXX compiler identification is GNU 11.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
Detected Linux
-- Downloading libtorch
https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.0.0%2Bcpu.zip
-- Downloading libtorch - done
-- Found Torch: /home/vonkin/l4casadi/libl4casadi/libtorch/lib/libtorch.so
-- Configuring done (34.9s)
-- Generating done (0.0s)
-- Build files have been written to: /home/vonkin/l4casadi/_skbuild/linux-x86_64-3.10/cmake-build
[1/3] Building CXX object CMakeFiles/l4casadi.dir/src/l4casadi.cpp.o
[2/3] Linking CXX shared library libl4casadi.so
[2/3] Install the project...
-- Install configuration: "Release"
-- Installing: /home/vonkin/l4casadi/_skbuild/linux-x86_64-3.10/cmake-install/l4casadi/libl4casadi.so
-- Set runtime path of "/home/vonkin/l4casadi/_skbuild/linux-x86_64-3.10/cmake-install/l4casadi/libl4casadi.so" to ""
running bdist_wheel
running build
running build_ext
running install
running install_lib
warning: install_lib: '_skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-3.10' does not exist -- no Python modules to install
running install_egg_info
running egg_info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
Copying UNKNOWN.egg-info to _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/UNKNOWN-0.0.0.egg-info
running install_scripts
Building wheel for UNKNOWN (pyproject.toml) ... done
Created wheel for UNKNOWN: filename=UNKNOWN-0.0.0-cp310-cp310-linux_x86_64.whl size=1825 sha256=9918a9f035a8aa88605d3d82e79ab0034d881dc66b96cb1d566d0c05624aab1b
Stored in directory: /tmp/pip-ephem-wheel-cache-lihu5ept/wheels/c1/8a/4b/5bc89154b8b0f1ad647963216cc480f5552907642c92c286b5
Successfully built UNKNOWN
Installing collected packages: UNKNOWN
Successfully installed UNKNOWN-0.0.0
The compilation and installation ran successfully. However, the package name was picked up incorrectly.
According to [1], an "UNKNOWN" package name can come from an outdated pip or setuptools installation. Can you upgrade pip and setuptools? For reference, I am on pip 23.1.2.
Another thing that catches my eye is Defaulting to user installation because normal site-packages is not writeable. Are you working within a virtual env? (Which I would recommend; I have never tried to install this on the system-wide Python env.)
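A fresh virtual env can be set up with the standard venv workflow (nothing here is specific to l4casadi; the .venv name is arbitrary):

```shell
# Create and activate a fresh virtual environment (standard stdlib venv).
python3 -m venv .venv
. .venv/bin/activate
# Inside the venv, upgrade the build tooling, then install from the repo
# root (both steps need network access):
# pip install --upgrade pip setuptools
# pip install .
```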
Thank you so much. I have successfully installed the package in a virtual env.
Hi, I added preliminary support for GPU/CUDA (See README for install instructions).
Let me know if it works for you. If so, please close this issue. Thanks!
Thank you very much for your continued help; this is very important for my work.
Could you give a more detailed introduction to this library? Can it only be used with the CPU version of PyTorch? How does the computation speed of this version compare with the previous approach of combining PyTorch with acados?