
Self-Improving Semantic Perception on a Construction Robot

[arxiv] [video]

This repository contains the code used on our self-improving robot, as well as links to all the data and configs to reproduce our results.

Installation

The software was developed under Ubuntu 18.04 and ROS Melodic. It is organized as a hybrid ROS and python workspace: localisation experiments and online learning run in the ROS workspace, while segmentation training and evaluation run in the python-only workspace. Please follow the respective instructions below to set up the two workspaces.

Installing the ROS workspace

First, install ROS Melodic according to [this]. Then, set up a catkin workspace:

mkdir ~/catkin_ws
export ROS_VERSION=melodic
export CATKIN_WS=~/catkin_ws
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --merge-devel # Necessary for catkin_tools >= 0.4.
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
cd src
git clone https://github.com/ethz-asl/background_foreground_segmentation.git

You can use wstool or some other ROS dependency manager to install all packages from dependencies.rosinstall, e.g.

cd $CATKIN_WS/src
wstool init
wstool merge background_foreground_segmentation/dependencies.rosinstall
wstool update

Finally, build and source the packages:

cd $CATKIN_WS/src
catkin build background_foreground_segmentation
source $CATKIN_WS/devel/setup.bash

You can optionally source the workspace in your .bashrc, e.g.

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc

Installing the python workspace

Create virtualenv

Use your favourite way to create a python environment. We recommend one of these:

mkvirtualenv background_foreground_segmentation --python=$(which python3)
python3 -m venv py3_venv
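
Whichever option you choose, activate the environment in every shell you use for the steps below (the py3_venv path matches the second command above):

workon background_foreground_segmentation  # if created with mkvirtualenv (virtualenvwrapper)
source py3_venv/bin/activate               # if created with python3 -m venv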

Install dependencies

In the following, let $BFSEG_ROOT be the folder of this repo (i.e., where this README is located), and assume that the virtualenv created above is sourced.
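
The exact install command is not shown here; a typical setup, assuming a requirements file at the repo root and a setup.py for the bfseg package under src/ (both of these are assumptions about the repo layout), would look like:

cd $BFSEG_ROOT
pip install -r requirements.txt   # hypothetical: pinned python dependencies, if such a file exists at the repo root
pip install -e src/               # hypothetical: install the bfseg package in editable mode, if src/ contains a setup.py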

Load Datasets

For training and evaluation, some datasets are required. We use TFDS to automatically download and extract these datasets. This requires around 50 GB of disk space and can take a couple of hours to prepare.

cd $BFSEG_ROOT
python data_setup.py
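
By default, tensorflow_datasets stores the prepared data under ~/tensorflow_datasets. If that disk does not have enough space, you can redirect TFDS to another location before running the script (this is standard TFDS behaviour via the TFDS_DATA_DIR environment variable and assumes data_setup.py uses the default TFDS data directory):

export TFDS_DATA_DIR=/path/with/enough/space   # any location with ~50 GB free
python data_setup.py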

Configure Settings

Create a file src/bfseg/settings.py with the following content:

TMPDIR = '/tmp'  # a temporary storage folder
EXPERIMENT_STORAGE_FOLDER = '<insert here>' # a folder where training logs should be stored

Reproducing Experiments

Experimental setup

We perform our experiments on a mobile robot that we deploy in three different environments: a garage, an office, and a construction site. The robot is equipped with a 3D LiDAR sensor and three cameras in a 360° horizontal configuration. For reference, the robot's ground-truth position is tracked with a prism and an external tracking station.

[Figure: Sensor setup on the mobile robot]

Rosbag Download

Each localisation experiment requires a different bagfile with recorded sensor readings. Since each bagfile is 20-30 GB, we do not provide a script that downloads all bagfiles in advance; please download the bagfiles you need to a directory of your choice. To get started, you can:

Use an example bagfile (Garage1): download bagfile

Localisation without Segmentation

roslaunch background_foreground_segmentation pickelhaube_full_garage1.launch

Localisation with NYU-pretrained weights

roslaunch background_foreground_segmentation pickelhaube_nyusegmentation_garage1.launch

Localisation with a segmentation model trained in the same environment (but not on the same dataset)

roslaunch background_foreground_segmentation pickelhaube_segmentation_garage1.launch

Localisation with a segmentation model trained first on the Office and then on the Garage

localising in the Garage on a different trajectory:

roslaunch background_foreground_segmentation crossdomain_nyutoofficetocla_garage3.launch

localising in the Office (to measure forgetting):

roslaunch background_foreground_segmentation crossdomain_nyutoofficetocla_office7.launch

Evaluation of Localised Trajectories

When running the experiments as above, poses from the ground truth (leica) and the robot localisation (icp) are recorded as CSV files in the $BFSEG_ROOT/logs directory. To compute the localisation accuracy, use the following script, which interpolates the leica poses to the timestamps of the icp localisation:

import pandas as pd
import numpy as np
import scipy as sp
import scipy.interpolate
import scipy.spatial.transform as stf
import matplotlib.pyplot as plt

icp = pd.read_csv('logs/pickelhaube_full_garage1_icp_<set correct number>.csv')
leica = pd.read_csv('logs/pickelhaube_full_garage1_leica_<set correct number>.csv')
plt.figure()
plt.plot(icp['trans_x'], icp['trans_y'])
plt.plot(-leica['trans_y'], leica['trans_x'])
plt.show()
# interpolate with calibrated time-offset
interpolated_gt_x = sp.interpolate.interp1d(
    leica['headerstamp']  + 4.5e8, leica['aligned_x'],
    bounds_error=False, fill_value=np.nan)
interpolated_gt_y = sp.interpolate.interp1d(
    leica['headerstamp']  + 4.5e8, leica['aligned_y'],
    bounds_error=False, fill_value=np.nan)

icp['gt_trans_x'] = interpolated_gt_x(icp['headerstamp'])
icp['gt_trans_y'] = interpolated_gt_y(icp['headerstamp'])
icp['rmse_xy'] = np.sqrt(
    np.square(icp['trans_x'] - icp['gt_trans_x']) +
    np.square(icp['trans_y'] - icp['gt_trans_y']))
icp.plot('headerstamp', 'rmse_xy')
print('Mean: {:.3f}, Median: {:.3f}, Std: {:.3f}'.format(icp['rmse_xy'].mean(), icp['rmse_xy'].median(), icp['rmse_xy'].std()))

Getting the Pseudolabels

Each run with pickelhaube_full_*.launch will automatically store a dataset of pseudolabels in $BFSEG_ROOT/logs/pickelhaube_full*/. You can load the dataset into tensorflow like this:

from bfseg.data.fsdata import load_fsdata
import tensorflow as tf
import os
import json
datapath = '$BFSEG_ROOT/logs/pickelhaube_full*/'  # replace with the actual, expanded path of one log folder
if not os.path.exists(os.path.join(datapath, 'dataset_info.json')):
    with open(os.path.join(datapath, 'dataset_info.json'), 'w') as f:
        json.dump({
            'output_shapes': {'rgb': [None, None, 3], 'labels': [None, None]},
            'output_types': {'rgb': 'float32', 'labels': 'int32'}
        }, f)
ds = load_fsdata(datapath)
# visualize a few examples
import matplotlib.pyplot as plt
for blob in ds.shard(10, 0).take(10).as_numpy_iterator():
    _, axs = plt.subplots(1, 2, figsize=(10, 5))
    axs[0].imshow(blob['rgb'] / 255)
    axs[1].imshow(blob['labels'])
plt.show()

Alternatively, you can load already created pseudolabels that we used for our experiments:

import bfseg.data.pseudolabels
import bfseg.data.meshdist.bfseg_cla_meshdist_labels
import tensorflow_datasets as tfds

ds = tfds.load('meshdist_pseudolabels', split='rumlang3')  # for construction
ds = tfds.load('meshdist_pseudolabels', split='office5')  # for office
ds = tfds.load('bfseg_cla_meshdist_labels')  # for garage
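
To feed one of these datasets into a training loop, a minimal input pipeline could look like the sketch below. It assumes that each example exposes 'rgb' and 'labels' keys as in the fsdata format above and uses an illustrative target resolution; the actual feature names and preprocessing of our training scripts are defined in src/bfseg.

import tensorflow as tf

def preprocess(blob):
    # Normalise the image and resize image and labels to a fixed (illustrative) size.
    rgb = tf.image.resize(tf.cast(blob['rgb'], tf.float32) / 255., (480, 640))
    labels = tf.image.resize(blob['labels'][..., tf.newaxis], (480, 640),
                             method='nearest')[..., 0]
    return rgb, labels

train_ds = (ds.map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
              .shuffle(256)
              .batch(8)
              .prefetch(tf.data.experimental.AUTOTUNE))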

Running Trainings

We use the sacred library to organize our training scripts and log results. You can start a training, for example, with

python src/train_binary_segmodel_base.py with /path/to/config.yaml --name "my training"

Results such as model snapshots and metrics are then stored in the directory set in src/bfseg/settings.py (see Installation).
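
Since the scripts use sacred, you can also override individual config entries directly on the command line, in addition to loading a YAML file. The parameter name below is purely illustrative; the valid keys are defined in the respective training script and the provided configs:

python src/train_binary_segmodel_base.py with /path/to/config.yaml batch_size=8 --name "my training"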

You may also use the pre-trained models or the config files used to obtain these (see pretrained_models_and_configs.md).

Online Learning

The paper experiment was conducted on bagfile Rumlang1.

For online learning we rely on simultaneously executing nodes in python2 and python3, so if you encounter import errors, make sure to install the dependencies for both python versions.

roslaunch background_foreground_segmentation pickelhaube_online_learning_rumlang1.launch

Software Overview

The repository is structured in the following way:

Dataset Creator

Overview

This package extracts the following information from a ROS Bag:

Continual learning

Overview

Dataset

Methods

We implement the following methods for continual learning: