This repository contains the implementation of Dynamic Movement Primitives, in Python 3.5.
In particular, this repository contains all the synthetic tests done for the work
GINESI, Michele; SANSONETTO, Nicola; FIORINI, Paolo. Overcoming some drawbacks of dynamic movement primitives. Robotics and Autonomous Systems, 2021, 144: 103844.
https://doi.org/10.1016/j.robot.2021.103844
File gsf21.bib contains the bibentry. Please refer to this work when using the package!
The package can be installed by running
pip install -e .
or
pip3 install -e .
After installation, you can import the DMP class as
from dmp.dmp_cartesian import DMPs_cartesian as dmp
After importing, you can create a DMP instance using
MP = dmp()
You can personalize the parameters using keyword arguments; use the help for additional details. After importing, you can create a trajectory and learn it:
import numpy as np
t = np.linspace(0, np.pi, 100)
X = np.transpose(np.array([t * np.cos(2 * t), t * np.sin(2 * t), t * t]))
MP.imitate_path(X)
Finally, you can execute a DMP using
x_track, _, _, _ = MP.rollout()
See the demos/ folder for scripts in which the various options are tested and compared, as well as examples of how to change the start and goal positions.
This repository contains two folders, namely dmp/ and demos/. The dmp/ folder contains all the functions needed to implement DMPs, while the demos/ folder contains the scripts used to perform the tests presented in the paper.
dmp/ contains the following functions:

compute_D1(n, dt)
returns the matrix which discretizes the first derivative of a 1D function sampled on an equispaced time domain of n points with timestep dt, using a second order estimate.

compute_D2(n, dt)
returns the matrix which discretizes the second derivative of a 1D function sampled on an equispaced time domain of n points with timestep dt, using a second order estimate.

exp_eul_step(y, A, b, dt)
returns, for the problem $\dot{y} = A y + b(t)$, the solution at time $n+1$, computed as $y_{n+1} = y_n + k \varphi_1(k A) (A y_n + b(t_n))$, with $y_n$ = y, $A$ = A, $b(t_n)$ = b, and $k$ = dt.

roto_dilatation(x0, x1)
returns the roto-dilatation matrix which maps x0 to x1.
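As an illustration, a second-order finite-difference matrix of the kind compute_D1 returns can be sketched as follows. This is a simplified reimplementation with a hypothetical name, not the package's actual code:

```python
import numpy as np

def first_derivative_matrix(n, dt):
    """Second-order finite-difference approximation of d/dt on n
    equispaced samples with timestep dt (simplified sketch)."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):          # central differences inside
        D[i, i - 1], D[i, i + 1] = -1.0, 1.0
    D[0, :3] = [-3.0, 4.0, -1.0]       # one-sided stencil at the start
    D[-1, -3:] = [1.0, -4.0, 3.0]      # one-sided stencil at the end
    return D / (2.0 * dt)

# second-order stencils differentiate quadratics exactly
t = np.linspace(0.0, 1.0, 11)
dx = first_derivative_matrix(11, t[1] - t[0]) @ (t ** 2)   # ~ 2 t
```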
demos/ contains the following files:
The term Dynamic Movement Primitives (DMPs) refers to a framework for trajectory learning based on second-order ODEs of spring-mass-damper type: $$ \begin{cases} \tau \dot{\mathbf{v}} = \mathbf{K} (\mathbf{g} - \mathbf{x}) - \mathbf{D} \mathbf{v} - \mathbf{K} ( \mathbf{g} - \mathbf{x}_0 ) s + \mathbf{K} \mathbf{f}(s) \\ \tau \dot{\mathbf{x}} = \mathbf{v} \end{cases} , $$ where $\mathbf{x, v, g, x_0, f} \in \mathbb{R}^d$ are, respectively, position and velocity of the system, goal and starting positions, and the non-linear forcing term. Matrices $\mathbf{K,D}\in\mathbb{R}^{d\times d}$ are diagonal matrices representing the elastic and damping terms. Parameter $s \in \mathbb{R}$ is a re-parametrization of time, governed by the Canonical System $$ \tau \dot{s} = -\alpha s, \qquad \alpha > 0. $$
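Since, for $s(0) = 1$, the Canonical System has the closed-form solution $s(t) = e^{-\alpha t / \tau}$, its numerical integration is easy to check. The sketch below uses illustrative constants and a plain explicit Euler step, not the package's integrator:

```python
import numpy as np

# Canonical system: tau * ds/dt = -alpha * s, with s(0) = 1.
# Closed form: s(t) = exp(-alpha * t / tau). Constants are illustrative.
alpha, tau = 4.0, 1.0
t = np.linspace(0.0, tau, 1001)
dt = t[1] - t[0]
s_exact = np.exp(-alpha * t / tau)

# explicit-Euler integration of the same ODE
s = np.empty_like(t)
s[0] = 1.0
for k in range(len(t) - 1):
    s[k + 1] = s[k] + dt * (-alpha / tau) * s[k]
```

The phase decays monotonically from 1 towards 0, which is what lets $s$ play the role of a (reversed) clock for the forcing term.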
The forcing term $\mathbf{f}$ is written in terms of basis functions. Each component $f_j(s)$ is written as $$ f_j(s) = \frac{\sum_{i=0}^N \omega_i \psi_i(s)}{\sum_{i=0}^N \psi_i(s)} \, s , $$ where $\omega_i\in\mathbb{R}$ and $\{\psi_i(s)\}_{i=0}^N$ is a set of basis functions. In the literature, radial Gaussian basis functions are used: given a set of centers $\{c_i\}_{i=0}^N$ and a set of positive widths $\{h_i\}_{i=0}^N$, we have $$ \psi_i(s) = \exp( -h_i (s - c_i)^2 ). $$
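A minimal numerical sketch of these definitions follows; the centers, widths, and weights below are arbitrary illustrative choices, not the package's defaults:

```python
import numpy as np

# Radial Gaussian basis psi_i(s) = exp(-h_i (s - c_i)^2) and one
# forcing-term component f_j(s) = (sum_i w_i psi_i) / (sum_i psi_i) * s.
s = np.linspace(0.0, 1.0, 50)                  # phase samples
c = np.linspace(0.0, 1.0, 11)                  # centers c_i
h = np.full(11, 100.0)                         # positive widths h_i
w = np.random.default_rng(0).normal(size=11)   # weights omega_i

psi = np.exp(-h * (s[:, None] - c[None, :]) ** 2)   # shape (50, 11)
f = (psi @ w) / psi.sum(axis=1) * s                 # f_j at each sample
```

Note the trailing factor $s$: it forces $f_j$ to vanish as the phase decays, so the system reduces to the pure spring-damper near the goal.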
We extend the approach to multiple sets of basis functions. In particular, we propose to use various classes of Wendland's basis functions $$ \begin{aligned} \phi_i^{(2)} (s) & = (1 - r)^2_+ \\ \phi_i^{(3)} (s) & = (1 - r)^3_+ \\ \phi_i^{(4)} (s) & = (1 - r)^4_+ (4r + 1) \\ \phi_i^{(5)} (s) & = (1 - r)^5_+ (5r + 1) \\ \phi_i^{(6)} (s) & = (1 - r)^6_+ (35 r ^ 2 + 18 r + 3) \\ \phi_i^{(7)} (s) & = (1 - r)^7_+ (16 r ^ 2 + 7 r + 1) \\ \phi_i^{(8)} (s) & = (1 - r)^8_+ (32 r ^ 3 + 25 r^2 + 8 r + 1) \end{aligned} $$ where $r = |h_i(s - c_i)|$ and $(\cdot)_+$ denotes the positive part. Moreover, we propose a set of mollifier-like basis functions $$ \varphi_i(s) = \begin{cases} \exp\left( - \dfrac{1}{1 - |a_i (s - c_i)| ^ 2} \right) & \text{if } |a_i (s - c_i)| < 1 \\ 0 & \text{otherwise} \end{cases} . $$
These basis functions are plotted in the figure: the first is the Gaussian, the second the mollifier-like function, and the others are the Wendland functions. All of them are plotted using $c = 0$ and $h = 1$.
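A small sketch of evaluating two of the compactly supported basis functions above, with the figure's parameters $c = 0$ and $h = a = 1$ (illustrative code, not the package's implementation):

```python
import numpy as np

# Wendland phi^(2) and the mollifier-like basis function, both with
# compact support [-1, 1] for c = 0 and h = a = 1.
s = np.linspace(-2.0, 2.0, 401)
r = np.abs(s)                      # r = |h (s - c)| with c = 0, h = 1

phi_w2 = np.maximum(1.0 - r, 0.0) ** 2        # Wendland phi^(2)

phi_m = np.zeros_like(s)                      # mollifier-like
inside = r < 1.0
phi_m[inside] = np.exp(-1.0 / (1.0 - r[inside] ** 2))
```

Unlike the Gaussian, both vanish identically outside $|s - c| < 1/h$, which is what makes the resulting regression problems sparse.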
During the learning phase, a trajectory $\mathbf{x}(t)$ is recorded. This makes it possible to evaluate the desired forcing term $\mathbf{f}(s)$. Then, the set of weights $\omega_i$ is computed using linear regression.
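The regression step can be sketched as an ordinary linear least-squares problem; everything below (basis choice, sizes, and the toy target standing in for the recorded forcing term) is illustrative, not the package's actual learning code:

```python
import numpy as np

# Given the desired forcing term sampled at phase values s_k, solve a
# linear least-squares problem for the weights omega_i.
s = np.linspace(0.01, 1.0, 200)                 # phase samples
c = np.linspace(0.0, 1.0, 15)                   # Gaussian centers
h = np.full(15, 50.0)                           # Gaussian widths

psi = np.exp(-h * (s[:, None] - c[None, :]) ** 2)
Phi = psi / psi.sum(axis=1, keepdims=True) * s[:, None]  # design matrix

f_des = s * (1.0 - s) * np.sin(6.0 * s)         # toy "recorded" forcing term
w, *_ = np.linalg.lstsq(Phi, f_des, rcond=None) # weights omega_i
f_fit = Phi @ w
```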
Then, the dynamical system can be integrated using the same weights when defining the forcing term, but possibly changing the initial and goal position. This will result in a trajectory of similar shape to the learned one, but adapted to the new points. Moreover, the goal position can change during the execution and convergence to it is still guaranteed.
DMPs can be written to be invariant under affine transformations. We have implemented this property in the particular case of roto-dilatation.
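As an illustration of the idea (the package implements the general case; this hypothetical helper only handles the planar one, where a roto-dilatation is a rotation composed with a uniform scaling):

```python
import numpy as np

# Planar sketch: the roto-dilatation mapping x0 to x1.
def roto_dilatation_2d(x0, x1):
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    scale = np.linalg.norm(x1) / np.linalg.norm(x0)
    theta = np.arctan2(x1[1], x1[0]) - np.arctan2(x0[1], x0[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * R

M = roto_dilatation_2d([1.0, 0.0], [1.0, 1.0])   # maps (1, 0) to (1, 1)
```

Applying such a matrix to the learned trajectory lets the DMP reach a relocated goal while preserving the trajectory's shape.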
In the following figure, the desired (and learned) trajectory is plotted in blue, and the new goal is represented by the black star. The dashed red line shows the execution obtained with classical DMPs, while the dash-dotted green line shows the execution obtained by taking advantage of the affine invariance.
The "original" DMP formulation was slightly different:
$$ \begin{cases} \tau \dot{\mathbf{v}} = \mathbf{K} (\mathbf{g} - \mathbf{x}) - \mathbf{D} \mathbf{v} - \mathbf{K} ( \mathbf{g} - \mathbf{x}_0 ) s + (\mathbf{g} - \mathbf{x}_0) \odot \mathbf{f}(s) \\ \tau \dot{\mathbf{x}} = \mathbf{v} \end{cases} . $$
It presents some drawbacks when the learned quantity $\mathbf{g} - \mathbf{x}_0$ is null or small in some direction; in particular, see
demos/old_vs_new.py

To contact me, please use one of the following mail addresses: