Code for M. Thor et al., Versatile modular neural locomotion control with fast learning, 2021, submitted to NMI.
Drawing inspiration from animal locomotion, we propose a simple yet versatile modular neural control structure with fast learning. The key advantages of our approach are that behavior-specific control modules can be added incrementally to obtain increasingly complex emergent locomotion behaviors, and that neural connections interfacing with existing modules can be quickly and automatically learned.
This code has been tested with the following hardware and software:
1 The Vortex physics engine requires a license (which is free for researchers). Alternatively, you can use the Newton physics engine, but we cannot guarantee successful behaviors with it. In particular, the wall and pipe climbing behaviors do not work well with the Newton physics engine. We hypothesize that this is because these behaviors were learned with the Vortex physics engine and require that engine's higher complexity. To test the remaining behaviors, a modified version of the advanced environment called Advanced_newton_env.ttt has been made.
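As an example, the Newton-based environment can be loaded in place of the default advanced environment when starting coppeliaSim. A minimal sketch, assuming Advanced_newton_env.ttt is placed in the simulations directory next to the other scene files, and using the environment variables introduced in the setup steps below:

```bash
# Load the Newton-based advanced environment instead of the default
# (Vortex-based) Advanced_env.ttt scene. The location of the scene file is an
# assumption; adjust the path if it differs in your checkout.
cd "$VREP_WORKER_PATH/VREP1"
./coppeliaSim.sh "$FRAMEWORK_PATH/CPG-RBFN-framework/simulations/Advanced_newton_env.ttt"
```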
The following explains the content of the six main directories:
- data: contains the .json files for the base controller and the eight behavior-specific modules presented in the paper. For a more detailed explanation, see the README.md file in the data directory.
- interfaces: contains the .lua files for interfacing with and setting up the simulation. It also contains the build_dir for cmake.

Installation will take approximately 15-30 minutes.
First, we need to set up the simulation (coppeliaSim):
Clone this repository _(optional: set $FRAMEWORK_PATH to the path for the directory containing the cloned repository)_:
git clone https://github.com/MathiasThor/CPG-RBFN-framework.git
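The optional environment variables used in the commands throughout this guide ($FRAMEWORK_PATH here and $VREP_WORKER_PATH for the worker directories below) can be exported once per shell. A minimal sketch with hypothetical locations; adjust the paths to your own setup:

```bash
# Hypothetical locations: FRAMEWORK_PATH points at the directory that contains
# the cloned CPG-RBFN-framework repository, VREP_WORKER_PATH at the directory
# that will contain the coppeliaSim worker copies (VREP1, VREP2, ...).
export FRAMEWORK_PATH="$HOME"
export VREP_WORKER_PATH="$HOME/vrep_workers"
```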
Make a copy of the coppeliaSim installation directory for each parallel worker and name the copies VREP1, VREP2, VREP3, VREP4, etc. _(optional: set $VREP_WORKER_PATH to the path for the directory containing the workers)_.

In remoteApiConnections.txt in each of the VREP# directories, change portIndex1_port so that VREP1 has 19997, VREP2 has 19996, VREP3 has 19995, VREP4 has 19994, etc.

Copy libv_repExtRosInterface.so from the utils directory into each of the worker directories:
cp $FRAMEWORK_PATH/CPG-RBFN-framework/utils/libv_repExtRosInterface.so $VREP_WORKER_PATH/VREP1/
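Copying the plugin (and, if desired, adjusting the ports) can also be done in one go for all workers. A minimal sketch, assuming four workers named VREP1-VREP4 under $VREP_WORKER_PATH and that remoteApiConnections.txt uses the standard "portIndex1_port = <port>" line format:

```bash
# Copy the ROS interface plugin into every worker and assign descending ports
# (VREP1 -> 19997, VREP2 -> 19996, ...). Assumes the default
# "portIndex1_port = 19997" line format in remoteApiConnections.txt.
for i in 1 2 3 4; do
  worker="$VREP_WORKER_PATH/VREP$i"
  cp "$FRAMEWORK_PATH/CPG-RBFN-framework/utils/libv_repExtRosInterface.so" "$worker/"
  port=$((19998 - i))
  sed -i "s/^portIndex1_port.*/portIndex1_port = $port/" "$worker/remoteApiConnections.txt"
done
```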
cd $FRAMEWORK_PATH/CPG-RBFN-framework/
sudo apt install python3-pip
pip3 install -r requirements.txt
sudo apt-get install libgsl-dev
The neural controllers use ROS to communicate with coppeliaSim. Therefore, make sure that ros-xxx-desktop-full (tested on melodic) is installed (see the ROS install guide).
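ROS commands such as roscore are only available after the ROS environment has been sourced in the current terminal. A minimal sketch, assuming ROS Melodic in its default install location:

```bash
# Make roscore and the ROS libraries available in this shell
# (add the line to ~/.bashrc to avoid repeating it in every terminal).
source /opt/ros/melodic/setup.bash
```

With ROS available, the following commands start roscore, launch coppeliaSim with the advanced environment in the first worker directory, build the controller interface, and run the simulation: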
roscore
cd $VREP_WORKER_PATH/VREP1/
./coppeliaSim.sh $FRAMEWORK_PATH/CPG-RBFN-framework/simulations/Advanced_env.ttt
cd $FRAMEWORK_PATH/CPG-RBFN-framework/interfaces/morf/sim/build_dir
rm CMakeCache.txt
cmake .
make
cd $FRAMEWORK_PATH/CPG-RBFN-framework/machine_learning
./run_sim.sh -t 400
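For convenience, these steps can also be collected in a small helper script that runs the long-lived processes in the background; running each command in its own terminal, as listed above, works just as well. A hypothetical sketch (the sleep durations are rough guesses):

```bash
#!/usr/bin/env bash
# Hypothetical convenience script: start roscore and one coppeliaSim worker in
# the background, rebuild the controller interface, and launch the simulation.
set -e
source /opt/ros/melodic/setup.bash

roscore &
sleep 5   # give the ROS master time to come up

( cd "$VREP_WORKER_PATH/VREP1" && \
  ./coppeliaSim.sh "$FRAMEWORK_PATH/CPG-RBFN-framework/simulations/Advanced_env.ttt" ) &
sleep 10  # give coppeliaSim time to load the scene

cd "$FRAMEWORK_PATH/CPG-RBFN-framework/interfaces/morf/sim/build_dir"
rm -f CMakeCache.txt
cmake . && make

cd "$FRAMEWORK_PATH/CPG-RBFN-framework/machine_learning"
./run_sim.sh -t 400
```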
The following shows how to start learning the base controller, which makes it possible to reproduce the quantitative results in the manuscript.
roscore
cd $VREP_WORKER_PATH/VREP1/
./coppeliaSim.sh $FRAMEWORK_PATH/CPG-RBFN-framework/simulations/MORF_base_behavior.ttt
cd $VREP_WORKER_PATH/VREP2/
./coppeliaSim.sh $FRAMEWORK_PATH/CPG-RBFN-framework/simulations/MORF_base_behavior.ttt
cd $VREP_WORKER_PATH/VREP3/
./coppeliaSim.sh $FRAMEWORK_PATH/CPG-RBFN-framework/simulations/MORF_base_behavior.ttt
cd $VREP_WORKER_PATH/VREP4/
./coppeliaSim.sh $FRAMEWORK_PATH/CPG-RBFN-framework/simulations/MORF_base_behavior.ttt
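Each coppeliaSim instance blocks the terminal it is started in, so the four workers are typically launched in separate terminals. They can also be started from a single shell; a minimal sketch, assuming the worker directories from the setup above:

```bash
# Start four coppeliaSim workers in the background, each loading the
# base-behavior scene from its own worker directory.
for i in 1 2 3 4; do
  ( cd "$VREP_WORKER_PATH/VREP$i" && \
    ./coppeliaSim.sh "$FRAMEWORK_PATH/CPG-RBFN-framework/simulations/MORF_base_behavior.ttt" ) &
done
```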
cd $FRAMEWORK_PATH/CPG-RBFN-framework/interfaces/morf/sim/build_dir
rm CMakeCache.txt
cmake .
make
In $FRAMEWORK_PATH/CPG-RBFN-framework/machine_learning/RL_master.py, set the following:
workers = 4
behaviour_selector = "walk"
Then start the learning:
./RL_repeater.sh -t 1 -e indirect -r MORF
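If you script your experiments, the two RL_master.py settings can also be changed from the shell before each run. A hypothetical sketch using sed, assuming the assignments appear exactly as the two lines shown above:

```bash
# Rewrite the two configuration assignments in RL_master.py.
# Assumes lines of the form "workers = ..." and "behaviour_selector = ...".
RL_MASTER="$FRAMEWORK_PATH/CPG-RBFN-framework/machine_learning/RL_master.py"
sed -i 's/^workers = .*/workers = 4/' "$RL_MASTER"
sed -i 's/^behaviour_selector = .*/behaviour_selector = "walk"/' "$RL_MASTER"
```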
Note that learning the advanced modules requires setting the corresponding behavior active (i.e., = 1) in neutronController.cpp at line 276, 281, 286, 291, or 296.
All software is available under the GPL-3 license.