Paul-Edouard Sarlin*
·
Mihai Dusmanu*
·
Johannes L. Schönberger
·
Pablo Speciale
·
Lukas Gruber
·
Viktor Larsson
·
Ondrej Miksik
·
Marc Pollefeys
LaMAR includes multi-sensor streams recorded by AR devices along hundreds of unconstrained trajectories captured over 2 years in 3 large indoor+outdoor locations.
This repository hosts the source code for LaMAR, a new benchmark for localization and mapping with AR devices in realistic conditions. The contributions of this work are:
See our ECCV 2022 tutorial for an overview of LaMAR and of the state of the art of localization and mapping for AR.
This codebase is composed of the following modules:
- lamar: evaluation pipeline and baselines for localization and mapping
- scantools: data API, processing tools, and pipeline

We introduce a new data format, called Capture, to handle multi-session and multi-sensor data recorded by different devices. A Capture object corresponds to a capture location. It is composed of multiple sessions, each corresponding to a data recording by a given device. Each session stores the raw sensor data, calibration, poses, and all assets generated during processing.
from scantools.capture import Capture
capture = Capture.load('data/CAB/')
print(capture.sessions.keys())
session = capture.sessions[session_id] # each session has a unique id
print(session.sensors.keys()) # each sensor has a unique id
print(session.rigs) # extrinsic calibration between sensors
keys = session.trajectories.key_pairs() # all (timestamp, sensor_or_rig_id)
T_w_i = session.trajectories[keys[0]]  # first pose, from sensor/rig to world
More details are provided in the specification document CAPTURE.md.
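To make the layout concrete, here is a purely illustrative, simplified mirror of the Capture structure described above, written with plain dataclasses. These are hypothetical stand-ins, not the real classes: the actual implementations live in scantools.capture and have richer interfaces (loading from disk, proper pose types, etc.).

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Pose = Tuple[float, ...]  # stand-in for a rigid transform T_w_i

@dataclass
class Session:
    """One data recording by a given device."""
    sensors: Dict[str, str] = field(default_factory=dict)   # sensor_id -> description
    rigs: Dict[str, Pose] = field(default_factory=dict)     # extrinsic calibration between sensors
    # (timestamp, sensor_or_rig_id) -> pose from sensor/rig to world
    trajectories: Dict[Tuple[int, str], Pose] = field(default_factory=dict)

@dataclass
class Capture:
    """One capture location, composed of multiple sessions."""
    sessions: Dict[str, Session] = field(default_factory=dict)  # session_id -> Session

# A Capture holds one location; each Session is one device recording.
capture = Capture()
capture.sessions['hl_2021-01-01'] = Session(
    sensors={'cam0': 'HoloLens grayscale camera'},
    trajectories={(1000, 'cam0'): (0.0, 0.0, 0.0)},
)
```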
:one: Install the core dependencies:
:two: Install the LaMAR libraries and pull the remaining pip dependencies:
python -m pip install -e .
:three: Optional: the processing pipeline additionally relies on heavier dependencies not required for benchmarking:
python -m pip install -e .[scantools]
:four: Optional: if you wish to contribute, install the development tools as well:
python -m pip install -e .[dev]
The Dockerfile provided in this project has multiple stages, two of which are scantools and lamar.
You can build the Docker images for these stages using the following commands:
# Build the 'scantools' stage
docker build --target scantools -t lamar:scantools -f Dockerfile ./
# Build the 'lamar' stage
docker build --target lamar -t lamar:lamar -f Dockerfile ./
Alternatively, if you don't want to build the images yourself, you can pull them from the GitHub Docker Registry using the following commands:
# Pull the 'scantools' image
docker pull ghcr.io/microsoft/lamar-benchmark/scantools:latest
# Pull the 'lamar' image
docker pull ghcr.io/microsoft/lamar-benchmark/lamar:latest
:one: Obtain the evaluation data: visit the dataset page and place the 3 scenes in ./data:
data/
├── CAB/
│ └── sessions/
│ ├── map/ # mapping session
│ ├── query_hololens/ # HoloLens test queries
│ ├── query_phone/ # Phone test queries
│ ├── query_val_hololens/ # HoloLens validation queries
│ └── query_val_phone/ # Phone validation queries
├── HGE/
│   └── ...
└── LIN/
    └── ...
Each scene contains a mapping session and queries for each device type. We provide a small set of validation queries with known ground-truth poses such that they can be used for developing algorithms and tuning parameters. We keep private the ground-truth poses of the test queries.
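Before running the evaluation, it can be useful to verify that the data was placed correctly. The following is a small sketch (not part of the LaMAR codebase) that checks the directory layout shown above under a given data root:

```python
from pathlib import Path

SCENES = ['CAB', 'HGE', 'LIN']
SESSION_DIRS = ['map', 'query_hololens', 'query_phone',
                'query_val_hololens', 'query_val_phone']

def missing_sessions(data_root):
    """Return the (scene, session) pairs absent from the expected layout."""
    root = Path(data_root)
    return [(scene, name)
            for scene in SCENES for name in SESSION_DIRS
            if not (root / scene / 'sessions' / name).is_dir()]
```

With all three scenes in place, `missing_sessions('./data')` returns an empty list.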
:two: Run the single-frame evaluation with the strongest baseline:
python -m lamar.run \
--scene $SCENE --ref_id map --query_id $QUERY_ID \
--retrieval fusion --feature superpoint --matcher superglue
where $SCENE is in {CAB,HGE,LIN} and $QUERY_ID is in {query_phone,query_hololens} for testing or in {query_val_phone,query_val_hololens} for validation. All outputs are written to ./outputs/ by default. For example, to localize validation Phone queries in the CAB scene:
python -m lamar.run \
--scene CAB --ref_id map --query_id query_val_phone \
--retrieval fusion --feature superpoint --matcher superglue
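To sweep this baseline over all scene and validation-query combinations, the command above can be assembled programmatically. This is a convenience sketch, not part of the repository; it only builds the argv lists, which can then be passed to subprocess.run:

```python
import itertools

SCENES = ['CAB', 'HGE', 'LIN']
VAL_QUERIES = ['query_val_phone', 'query_val_hololens']

def make_command(scene, query_id):
    """Assemble the argv list for one lamar.run evaluation, as documented above."""
    return ['python', '-m', 'lamar.run',
            '--scene', scene, '--ref_id', 'map', '--query_id', query_id,
            '--retrieval', 'fusion', '--feature', 'superpoint',
            '--matcher', 'superglue']

# One command per (scene, validation query) pair: 3 scenes x 2 query sets.
commands = [make_command(s, q) for s, q in itertools.product(SCENES, VAL_QUERIES)]
```

Each entry of `commands` can be executed with `subprocess.run(cmd, check=True)`.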
This executes two steps: building a map from the reference session, then localizing each query frame against it.
:three: Obtain the evaluation results:
:four: Workflow: the benchmarking pipeline is modular; each step is implemented as a task in lamar/tasks/.
Each step of the pipeline corresponds to a runfile in scantools/run_*.py that can be used either from the command line:
python -m scantools.run_phone_to_capture [--args]
or as a Python library:
from scantools import run_phone_to_capture
run_phone_to_capture.run(...)
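This dual command-line/library interface can be sketched as follows. The skeleton below is hypothetical: the actual runfiles in scantools define their own arguments and conversion logic.

```python
import argparse

def run(input_path, output_path):
    """Core logic, importable as a library: from scantools import run_x; run_x.run(...)."""
    # ... convert the recording at input_path into a Capture session at output_path ...
    return output_path

def main(argv=None):
    """CLI entry point: python -m scantools.run_x --input_path ... --output_path ..."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--input_path', required=True)
    parser.add_argument('--output_path', required=True)
    args = parser.parse_args(argv)
    return run(args.input_path, args.output_path)

# Library use:
out = run('raw/phone_rec', 'data/CAB/sessions/phone_rec')
# CLI use (argv passed explicitly here for illustration):
out_cli = main(['--input_path', 'raw/phone_rec',
                '--output_path', 'data/CAB/sessions/phone_rec'])
```

Keeping the core logic in run() lets pipeline scripts compose steps in-process while the argparse wrapper exposes the same step as a standalone command.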
We provide pipeline scripts that execute all necessary steps:
- pipelines/pipeline_scans.py: aligns multiple NavVis sessions and merges them into a unique reference session
- pipelines/pipeline_sequence.py: aligns all AR sequences to the reference session

The raw data will be released soon such that anyone is able to run the processing pipeline without access to capture devices.
Here are runfiles that could be handy for importing and exporting data:
- run_phone_to_capture: convert a ScanCapture recording into a Capture session
- run_navvis_to_capture: convert a NavVis recording into a Capture session
- run_session_to_kapture: convert a Capture session into a Kapture instance
- run_capture_to_empty_colmap: convert a Capture session into an empty COLMAP model
- run_image_anonymization: anonymize faces and license plates using the Brighter.AI API
- run_radio_anonymization: anonymize radio signal IDs
- run_combine_sequences: combine multiple sequence sessions into a single session
- run_qrcode_detection: detect QR codes in images and store their poses

We also release the raw original data, as recorded by the devices (HoloLens, phones, NavVis scanner), with minimal post-processing.
Like the evaluation data, the raw data is accessed through the dataset page.
More details are provided in the specification document RAW-DATA.md.
We are still in the process of fully releasing LaMAR. Here is the release plan:
Please consider citing our work if you use any code from this repo or ideas presented in the paper:
@inproceedings{sarlin2022lamar,
author = {Paul-Edouard Sarlin and
Mihai Dusmanu and
Johannes L. Schönberger and
Pablo Speciale and
Lukas Gruber and
Viktor Larsson and
Ondrej Miksik and
Marc Pollefeys},
title = {{LaMAR: Benchmarking Localization and Mapping for Augmented Reality}},
booktitle = {ECCV},
year = {2022},
}
Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.
Privacy information can be found at https://privacy.microsoft.com/en-us/
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.