
Self-Supervised Depth Correction of Lidar Measurements from Map Consistency Loss
BSD 3-Clause "New" or "Revised" License


RA-L Arxiv Slides Video

This repository is a ROS node that contains an implementation of the following:

Contributions

Introduction

Point cloud maps acquired using consumer-level lidars still suffer from bias related to measuring scene surfaces at high incidence angles. We propose a learnable method that refines lidar measurements based on the local shape of the measured surface. In particular, we introduce two novel point cloud map consistency losses, which facilitate self-supervised learning of lidar depth correction models on real data. Complementary to removing the bias from lidar measurements, we demonstrate that the depth correction models help to reduce localization drift. Please refer to the paper on arXiv for more details.

Topics

Parameters

Training Pipeline

LiDAR measurement correction models are trained in a self-supervised manner on real data from diverse environments. The models exploit multiple point cloud measurements of the same scene from different viewpoints in order to reduce the measurement bias based on the consistency of the constructed map.
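As a minimal illustration of the map-consistency idea (not the repository's actual loss; the plane, bias value, and normal used here are made up for the example), a point-to-plane residual between two scans of the same surface grows with any depth bias:

```python
import numpy as np

# Hypothetical example: two scans of the same flat surface (the plane z = 0)
# already transformed into a shared world frame.
scan_a = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

# The second scan measures the same plane but carries a depth bias along z.
bias = 0.05
scan_b = np.array([[0.5, 0.5, bias],
                   [1.5, 0.5, bias]])

# Plane normal estimated from scan_a (trivially the z axis here).
normal = np.array([0.0, 0.0, 1.0])
point_on_plane = scan_a[0]

# Point-to-plane residuals of scan_b against scan_a's surface: a pair of
# consistent, bias-free scans would make this loss zero.
residuals = (scan_b - point_on_plane) @ normal
consistency_loss = float(np.mean(residuals ** 2))
print(consistency_loss)  # 0.0025 for a 5 cm bias
```

Minimizing such a residual over a learnable correction of the measured depths is what drives the self-supervised training described above.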

Installation

Please follow the installation instructions provided in docs/install.md.

Usage

Running the depth correction node to refine point clouds used further as input to SLAM:

roslaunch depth_correction slam_eval.launch depth_correction:=true model_class:=ScaledPolynomial model_state_dict:=/path/to/model_state_dict.pth play:=true bag:=/path/to/bag/file/name.bag rviz:=true

Launch training of depth correction models:

roslaunch depth_correction train_demo.launch rviz:=true

Datasets

For model evaluation, we utilize the following publicly available datasets and provide a training pipeline for them:

In addition, we provide our own dataset (FEE Corridor), which contains point cloud data captured in an indoor environment with precise localization and ground truth mapping information.

Download the dataset from http://ptak.felk.cvut.cz/vras/data/fee_corridor/fee_corridor.zip and put it in the ./data folder (50 GB). It has the following structure:

fee_corridor
├── bags
├── maps
│   ├── blk
│   ├── e57
│   ├── npz
│   ├── pcd
│   ├── ptx
│   └── vtk
└── sequences
    ├── seq1
    │   ├── calibration
    │   ├── ouster_points
    │   ├── poses
    │   └── static_ouster_points
    └── seq2
        ├── calibration
        ├── ouster_points
        ├── poses
        └── static_ouster_points

There, the maps folder contains ground truth scans captured with the Leica BLK360 scanner. Two "stop-and-go" data sequences of a robot with a mounted Ouster OS0-128 lidar moving in the same environment are provided. Individual scans from static robot positions are recorded in the static_ouster_points folders in *.npz or *.bin file formats. Ground truth robot poses are obtained using the norlab-icp-mapper lidar SLAM with alignment to the prerecorded ground truth map. Additionally, robot poses are recorded with the help of a Leica tracker.

To explore the data, simply run:

python -m depth_correction.datasets.fee_corridor
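The per-scan files are structured point clouds; a rough sketch of how an *.npz scan can be read with NumPy (the archive key and the field names x, y, z are assumptions here; consult depth_correction.datasets.fee_corridor for the real layout):

```python
import io
import numpy as np

# Build a stand-in structured scan like those stored in static_ouster_points.
# The field names ('x', 'y', 'z') and the archive key 'cloud' are assumptions.
scan = np.zeros(4, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
scan['x'] = [1.0, 2.0, 3.0, 4.0]

# Round-trip through the *.npz format (in memory here instead of a file).
buf = io.BytesIO()
np.savez(buf, cloud=scan)
buf.seek(0)
loaded = np.load(buf)['cloud']

# Stack the structured fields into a plain (N, 3) array of points.
points = np.stack([loaded['x'], loaded['y'], loaded['z']], axis=1)
print(points.shape)  # (4, 3)
```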

Optimization Example

Point cloud scan correction with an ICP-like point-to-plane distance as the loss function.

# import necessary libraries
import torch
from depth_correction.datasets.fee_corridor import Dataset, dataset_names
from depth_correction.depth_cloud import DepthCloud
from depth_correction.model import ScaledPolynomial
from depth_correction.preproc import filtered_cloud
from depth_correction.config import Config
from depth_correction.loss import point_to_plane_dist

# in this example we use collected indoor dataset
ds = Dataset(name=dataset_names[0])

# setup data and optimization parameters
cfg = Config()
cfg.lr = 0.001

# define model for scans correction: depth correction term in the example d' = w * gamma^4
model = ScaledPolynomial(w=[0.0], exponent=[4])
optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr)

# the two neighboring scans are used
points1_struct, pose1 = ds[0]
points2_struct, pose2 = ds[1]

# construct depth cloud objects from points
cloud1 = DepthCloud.from_structured_array(points1_struct, dtype=cfg.numpy_float_type())
cloud2 = DepthCloud.from_structured_array(points2_struct, dtype=cfg.numpy_float_type())

# apply grid and depth filters to clouds
cloud1 = filtered_cloud(cloud1, cfg)
cloud2 = filtered_cloud(cloud2, cfg)

# transform point clouds to the same world coordinate frame
cloud1 = cloud1.transform(torch.as_tensor(pose1, dtype=cfg.torch_float_type()))
cloud2 = cloud2.transform(torch.as_tensor(pose2, dtype=cfg.torch_float_type()))

# compute cloud features necessary for optimization (like normals and incidence angles)
cloud1.update_all(r=cfg.nn_r)
cloud2.update_all(r=cfg.nn_r)

# run optimization loop
for i in range(cfg.n_opt_iters):
    cloud1_corr = model(cloud1)
    cloud2_corr = model(cloud2)

    cloud1_corr.update_points()
    cloud2_corr.update_points()

    loss = point_to_plane_dist(clouds=[cloud1_corr, cloud2_corr])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    print(model)
    print(loss)
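As a standalone sketch of the correction term named in the comment above (d' = w * gamma^4): how ScaledPolynomial applies the term to the measured depth (sign and scaling) is an assumption here, so treat this only as an illustration of why large incidence angles matter.

```python
import numpy as np

# Depths and incidence angles gamma (radians) for a few sample points.
depth = np.array([5.0, 5.0, 5.0])
gamma = np.array([0.0, 0.5, 1.2])

# Polynomial correction term from the example above: w * gamma^4.
# Applying it as a relative depth correction is an assumption here;
# the real model lives in depth_correction.model.ScaledPolynomial.
w = 0.01
corrected = depth * (1.0 - w * gamma ** 4)

# Points hit at grazing incidence (large gamma) are adjusted the most.
print(np.round(corrected, 4))
```

With w = 0 the model is the identity, which is why the optimization example starts from w = [0.0] and lets the map consistency loss drive it away from zero.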

Training of the Models

Evaluation of baselines.

Compute map consistency metrics on raw data (not corrected with the models):

python -m depth_correction.main --dataset fee_corridor --min-depth 1.0 --max-depth 25.0 --grid-res 0.2 --nn-r 0.4 -- eval_loss_baselines

Estimate localization accuracy with norlab_icp_mapper using ground truth from a dataset:

python -m depth_correction.main --dataset fee_corridor --min-depth 1.0 --max-depth 25.0 --grid-res 0.2 --nn-r 0.4 --ros-master-port 12311 --rviz true -- eval_slam_baselines

Training of the depth correction models and their evaluation on map consistency and localization accuracy (the whole pipeline):

python -m depth_correction.main --dataset fee_corridor --min-depth 1.0 --max-depth 25.0 --grid-res 0.2 --nn-r 0.4 -- train_and_eval_all

Citation

Feel free to cite the work if you find it relevant to your research.

@misc{https://doi.org/10.48550/arxiv.2303.01123,
  doi = {10.48550/ARXIV.2303.01123},
  url = {https://arxiv.org/abs/2303.01123},
  author = {Agishev, Ruslan and Pětříček, Tomáš and Zimmermann, Karel},
  keywords = {Robotics (cs.RO), FOS: Computer and information sciences, FOS: Computer and information sciences, 68T40},
  title = {Self-Supervised Depth Correction of Lidar Measurements from Map Consistency Loss},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
@ARTICLE{10158328,
  author={Agishev, Ruslan and Petříček, Tomáš and Zimmermann, Karel},
  journal={IEEE Robotics and Automation Letters}, 
  title={Self-Supervised Depth Correction of Lidar Measurements From Map Consistency Loss}, 
  year={2023},
  volume={8},
  number={8},
  pages={4681-4688},
  doi={10.1109/LRA.2023.3287791}
}