Abstract
Localization in a dynamic environment suffers from moving objects. Removing dynamic objects is crucial in this situation but becomes tricky when their motion is coupled with ego-motion. In this paper, instead of proposing a new SLAM framework, we aim at a more general strategy for localization scenarios, so that Dynamic Registration can be integrated with any LiDAR SLAM system. We utilize 3D object detection to obtain potential moving objects and remove them temporarily. We then propose Dynamic Registration, which iteratively estimates ego-motion and segments moving objects until no new static object is produced. Static objects are merged back into the environment. Finally, we obtain segmented dynamic objects, a static environment enriched with static objects, and an ego-motion estimate in the dynamic environment. We evaluate the performance of our proposed method on the KITTI Tracking dataset. Results show stable and consistent improvements over other classical registration algorithms.
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection
Authors: Konstantinos A. Tsintotas, Loukas Bampis, Antonios Gasteratos
Abstract
Where am I? This is one of the most critical questions that any intelligent system should answer to decide whether it navigates to a previously visited area. This problem has long been acknowledged for its challenging nature in simultaneous localization and mapping (SLAM), wherein the robot needs to correctly associate the incoming sensory data to the database allowing consistent map generation. The significant advances in computer vision achieved over the last 20 years, the increased computational power, and the growing demand for long-term exploration contributed to efficiently performing such a complex task with inexpensive perception sensors. In this article, visual loop closure detection, which formulates a solution based solely on appearance input data, is surveyed. We start by briefly introducing place recognition and SLAM concepts in robotics. Then, we describe a loop closure detection system's structure, covering an extensive collection of topics, including the feature extraction, the environment representation, the decision-making step, and the evaluation process. We conclude by discussing open and new research challenges, particularly concerning the robustness in dynamic environments, the computational complexity, and scalability in long-term operations. The article aims to serve as a tutorial and a position paper for newcomers to visual loop closure detection.
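For newcomers, a deliberately minimal illustration of the three stages the survey covers (feature extraction, environment representation, decision making) is sketched below in Python. The ORB features, brute-force matching, and match-ratio threshold are illustrative assumptions, not a method prescribed by the survey.

```python
# Minimal appearance-based loop closure check (illustrative sketch only).
# ORB features, brute-force Hamming matching, and a simple match-ratio
# threshold stand in for the feature extraction, map representation, and
# decision-making stages discussed in the survey.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
keyframe_db = []  # list of (frame_id, descriptors) for previously visited places

def loop_closure_candidate(frame_id, image, min_ratio=0.3):
    """Return the best matching past keyframe id, or None if no loop is detected."""
    _, desc = orb.detectAndCompute(image, None)
    if desc is None:
        return None
    best_id, best_score = None, 0.0
    for past_id, past_desc in keyframe_db:
        matches = matcher.match(desc, past_desc)
        score = len(matches) / max(len(desc), len(past_desc))  # crude similarity
        if score > best_score:
            best_id, best_score = past_id, score
    keyframe_db.append((frame_id, desc))
    return best_id if best_score >= min_ratio else None
```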
Keyword: Visual inertial
There is no result
Keyword: livox
There is no result
Keyword: loam
There is no result
Keyword: Visual inertial odometry
There is no result
Keyword: lidar
Building Change Detection using Multi-Temporal Airborne LiDAR Data
Authors: Ritu Yadav, Andrea Nascetti, Yifang Ban
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Abstract
Building change detection is essential for monitoring urbanization, disaster assessment, urban planning and frequently updating maps. 3D structure information from airborne light detection and ranging (LiDAR) is very effective for detecting urban changes. But the 3D point cloud from airborne LiDAR (ALS) holds an enormous amount of unordered and irregularly sparse information. Handling such data is tricky and consumes large memory for processing. Most of this information is not necessary when we are looking for a particular type of urban change. In this study, we propose an automatic method that reduces the 3D point clouds into a much smaller representation without losing the information required for detecting building changes. The method utilizes the deep learning (DL) model U-Net for segmenting the buildings from the background. The produced segmentation maps are then processed further to detect changes, and the results are refined using morphological methods. For the change detection task, we used multi-temporal airborne LiDAR data acquired over Stockholm in 2017 and 2019. The changes in buildings are classified into four types: 'newly built', 'demolished', 'taller' and 'shorter'. The detected changes are visualized in one map for better interpretation.
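As a rough illustration of the change-classification step described above, the following sketch compares two building masks and height rasters. It assumes the U-Net segmentation maps and rasterized heights for the two epochs are already available; the morphological cleanup and the 2 m height threshold are illustrative choices, not the authors' settings.

```python
# Hedged sketch of multi-temporal building change classification.
# Inputs are assumed: binary building masks (e.g. from a U-Net) and rasterized
# height maps for the two epochs; thresholds and cleanup are illustrative.
import numpy as np
from scipy import ndimage

def classify_building_changes(mask_t1, mask_t2, height_t1, height_t2, dh=2.0):
    """Return a label map: 0 none, 1 newly built, 2 demolished, 3 taller, 4 shorter."""
    # Morphological opening removes small spurious segmentation blobs.
    m1 = ndimage.binary_opening(mask_t1, structure=np.ones((3, 3)))
    m2 = ndimage.binary_opening(mask_t2, structure=np.ones((3, 3)))

    labels = np.zeros(m1.shape, dtype=np.uint8)
    labels[m2 & ~m1] = 1                      # newly built
    labels[m1 & ~m2] = 2                      # demolished
    both = m1 & m2
    diff = height_t2 - height_t1
    labels[both & (diff > dh)] = 3            # taller
    labels[both & (diff < -dh)] = 4           # shorter
    return labels
```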
Dynamic Registration: Joint Ego Motion Estimation and 3D Moving Object Detection in Dynamic Environment
Authors: Wenyu Li, Xinyu Zhang, Zijun Wang, Shichun Guo, Nan Qiu, Jun Li
Abstract
Localization in a dynamic environment suffers from moving objects. Removing dynamic objects is crucial in this situation but becomes tricky when their motion is coupled with ego-motion. In this paper, instead of proposing a new SLAM framework, we aim at a more general strategy for localization scenarios, so that Dynamic Registration can be integrated with any LiDAR SLAM system. We utilize 3D object detection to obtain potential moving objects and remove them temporarily. We then propose Dynamic Registration, which iteratively estimates ego-motion and segments moving objects until no new static object is produced. Static objects are merged back into the environment. Finally, we obtain segmented dynamic objects, a static environment enriched with static objects, and an ego-motion estimate in the dynamic environment. We evaluate the performance of our proposed method on the KITTI Tracking dataset. Results show stable and consistent improvements over other classical registration algorithms.
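The following Python sketch illustrates the alternation the abstract describes: estimate ego-motion from presumed-static points, then re-test each detected object and merge it back if it aligns. The Open3D ICP call, the residual threshold, and the stopping rule are assumptions for illustration, not the paper's exact algorithm.

```python
# Hedged sketch of iterative ego-motion estimation and moving-object segmentation.
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def dynamic_registration(src_pts, tgt_pts, object_masks, iters=5, resid=0.3):
    """src_pts/tgt_pts: Nx3 / Mx3 arrays; object_masks: boolean masks over
    src_pts marking 3D detections (potentially moving objects)."""
    moving = np.zeros(len(src_pts), dtype=bool)
    for m in object_masks:                   # remove all detections temporarily
        moving |= m
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(tgt_pts)
    tree = cKDTree(tgt_pts)
    T = np.eye(4)
    for _ in range(iters):
        src = o3d.geometry.PointCloud()
        src.points = o3d.utility.Vector3dVector(src_pts[~moving])
        T = o3d.pipelines.registration.registration_icp(src, tgt, 0.5, T).transformation
        changed = False
        for m in object_masks:               # re-check detections under the new pose
            if not moving[m].any():
                continue
            p = src_pts[m] @ T[:3, :3].T + T[:3, 3]
            if np.median(tree.query(p)[0]) < resid:
                moving[m] = False            # aligns well: static object, merge back
                changed = True
        if not changed:                      # no new static objects: converged
            break
    return T, moving
```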
Keyword: loop detection
There is no result
Keyword: autonomous driving
Toward Policy Explanations for Multi-Agent Reinforcement Learning
Abstract
Advances in multi-agent reinforcement learning (MARL) enable sequential decision making for a range of exciting multi-agent applications such as cooperative AI and autonomous driving. Explaining agent decisions is crucial for improving system transparency, increasing user satisfaction, and facilitating human-agent collaboration. However, existing work on explainable reinforcement learning mostly focuses on the single-agent setting and is not suitable for addressing the challenges posed by multi-agent environments. We present novel methods to generate two types of policy explanations for MARL: (i) policy summarization about the agent cooperation and task sequence, and (ii) language explanations to answer queries about agent behavior. Experimental results on three MARL domains demonstrate the scalability of our methods. A user study shows that the generated explanations significantly improve user performance and increase subjective ratings on metrics such as user satisfaction.
Dataset for Robust and Accurate Leading Vehicle Velocity Recognition
Abstract
Recognition of the surrounding environment using a camera is an important technology in Advanced Driver-Assistance Systems and Autonomous Driving, and in recent years such recognition tasks are often solved by machine learning approaches such as deep learning. Machine learning requires datasets for training and evaluation. To develop recognition technology that is robust in the real world, data from environments that are difficult for cameras, such as rainy weather or nighttime, are essential in addition to data from normal driving conditions. We have constructed a dataset with which one can benchmark this technology, targeting velocity recognition of the leading vehicle, an important task for Advanced Driver-Assistance Systems and Autonomous Driving. The dataset is available at https://signate.jp/competitions/657
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame
Authors: Youngjoon Yu, Hong Joo Lee, Hakmin Lee, Yong Man Ro
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Object detection has attracted great attention in the computer vision area and has emerged as an indispensable component in many vision systems. In the era of deep learning, many high-performance object detection networks have been proposed. Although these detection networks show high performance, they are vulnerable to adversarial patch attacks: changing the pixels in a restricted region can easily fool the detection network in the physical world. In particular, person-hiding attacks are emerging as a serious problem in many safety-critical applications such as autonomous driving and surveillance systems. Although it is necessary to defend against adversarial patch attacks, very few efforts have been dedicated to defending against person-hiding attacks. To tackle the problem, in this paper we propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns, whereas previous methods optimize the model. In the proposed method, a frame-shaped pattern called a 'universal white frame' (UWF) is optimized and placed on the outside of the image. To defend against adversarial patch attacks, the UWF should have three properties: (i) suppressing the effect of the adversarial patch, (ii) maintaining the original prediction, and (iii) being applicable regardless of the image. To satisfy these properties, we propose a novel pattern optimization algorithm that can defend against the adversarial patch. Through comprehensive experiments, we demonstrate that the proposed method effectively defends against the adversarial patch attack.
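To make the idea concrete, the sketch below composites a frame-shaped pattern onto the border of input images and notes, in comments, the kind of objective one might optimize. The frame width, the border placement (rather than padding outside the image), and the hypothetical detector_objectness / consistency terms are all assumptions, not the paper's UWF implementation.

```python
# Illustrative sketch only: applying a frame-shaped defensive pattern to images.
import torch

def apply_frame(images, frame, width=16):
    """images: (N, C, H, W) in [0, 1]; frame: trainable (C, H, W) pattern.
    Overwrites a border of `width` pixels with the frame pattern."""
    mask = torch.zeros_like(images[0, :1])          # (1, H, W) border mask
    mask[:, :width, :] = 1; mask[:, -width:, :] = 1
    mask[:, :, :width] = 1; mask[:, :, -width:] = 1
    return images * (1 - mask) + frame.clamp(0, 1) * mask

# Pseudo-objective (hypothetical names, for orientation only): for patched and
# clean images,
#   loss = detector_objectness(apply_frame(patched, frame)).sum()              # (i) suppress the attack
#        + consistency(detector(apply_frame(clean, frame)), detector(clean))   # (ii) keep clean predictions
# averaged over many images so one frame works regardless of the input.        # (iii) universality
```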
Keyword: mapping
Self-scalable Tanh (Stan): Faster Convergence and Better Generalization in Physics-informed Neural Networks
Abstract
Physics-informed Neural Networks (PINNs) are gaining attention in the engineering and scientific literature for solving a range of differential equations with applications in weather modeling, healthcare, manufacturing, and so on. Poor scalability is one of the barriers to utilizing PINNs for many real-world problems. To address this, a Self-scalable tanh (Stan) activation function is proposed for PINNs. The proposed Stan function is smooth, non-saturating, and has a trainable parameter. During training, it allows easy flow of gradients to compute the required derivatives and also enables systematic scaling of the input-output mapping. It is also shown theoretically that a PINN with the proposed Stan function has no spurious stationary points when using gradient descent algorithms. The proposed Stan is tested on several numerical studies involving general regression problems. It is subsequently used for solving multiple forward problems, which involve second-order derivatives and multiple dimensions, and an inverse problem where the thermal diffusivity is predicted through heat conduction in a rod. The results of these case studies establish empirically that the Stan activation function can achieve better training and more accurate predictions than state-of-the-art activation functions.
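A minimal PyTorch sketch of such an activation is given below. The specific form tanh(x) + beta * x * tanh(x) with a per-feature trainable beta is an assumption inferred from the abstract, not a verified reproduction of Stan.

```python
# Hedged sketch of a self-scalable tanh activation with a trainable parameter.
# The form tanh(x) + beta * x * tanh(x) (per-feature beta) is assumed here.
import torch
import torch.nn as nn

class Stan(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(num_features))  # trainable scale

    def forward(self, x):
        t = torch.tanh(x)
        return t + self.beta * x * t  # non-saturating: grows like beta * x for large x

# Example: a small PINN-style MLP using Stan between hidden layers.
net = nn.Sequential(nn.Linear(2, 64), Stan(64), nn.Linear(64, 64), Stan(64), nn.Linear(64, 1))
```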
A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching
Authors: Paul Roetzer, Paul Swoboda, Daniel Cremers, Florian Bernard
Subjects: Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Optimization and Control (math.OC)
Abstract
We present a scalable combinatorial algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes. We use the mathematically elegant formalism proposed by Windheuser et al. (ICCV 2011), in which 3D shape matching was formulated as an integer linear program over the space of orientation-preserving diffeomorphisms. Until now, the resulting formulation had limited practical applicability due to its complicated constraint structure and its large size. We propose a novel primal heuristic coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers. This allows us to handle shapes with substantially more triangles than previously solvable. We demonstrate compelling results on diverse datasets and even showcase that we can address the challenging setting of matching two partial shapes without availability of complete shapes. Our code is publicly available at this http URL.
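For readers unfamiliar with this formulation style, the toy example below solves the LP relaxation of a tiny assignment-type matching ILP with SciPy. It only illustrates the generic "matching as an integer linear program" idea and has nothing to do with the paper's diffeomorphism constraints, primal heuristic, or Lagrange dual.

```python
# Toy illustration of "matching as an integer linear program": LP relaxation
# of a tiny 3x3 assignment problem. Costs and size are arbitrary.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 2.0, 8.0],
                 [4.0, 3.0, 7.0],
                 [3.0, 1.0, 6.0]])
n = cost.shape[0]
# Equality constraints: each source vertex matched once, each target once.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1        # row sums
    A_eq[n + i, i::n] = 1                 # column sums
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.ones(2 * n),
              bounds=[(0, 1)] * (n * n), method="highs")
matching = res.x.reshape(n, n).round().astype(int)
print(matching)  # for assignment problems the LP relaxation is already integral
```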
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection
Authors: Konstantinos A. Tsintotas, Loukas Bampis, Antonios Gasteratos
Abstract
Where am I? This is one of the most critical questions that any intelligent system should answer to decide whether it navigates to a previously visited area. This problem has long been acknowledged for its challenging nature in simultaneous localization and mapping (SLAM), wherein the robot needs to correctly associate the incoming sensory data to the database allowing consistent map generation. The significant advances in computer vision achieved over the last 20 years, the increased computational power, and the growing demand for long-term exploration contributed to efficiently performing such a complex task with inexpensive perception sensors. In this article, visual loop closure detection, which formulates a solution based solely on appearance input data, is surveyed. We start by briefly introducing place recognition and SLAM concepts in robotics. Then, we describe a loop closure detection system's structure, covering an extensive collection of topics, including the feature extraction, the environment representation, the decision-making step, and the evaluation process. We conclude by discussing open and new research challenges, particularly concerning the robustness in dynamic environments, the computational complexity, and scalability in long-term operations. The article aims to serve as a tutorial and a position paper for newcomers to visual loop closure detection.
Elevation Mapping for Locomotion and Navigation using GPU
Abstract
Perceiving the surrounding environment is crucial for autonomous mobile robots. An elevation map provides a memory-efficient and simple yet powerful geometric representation for ground robots. Robots can use this information for navigation in an unknown environment or for perceptive locomotion control over rough terrain. Depending on the application, various post-processing steps may be incorporated, such as smoothing, inpainting, or plane segmentation. In this work, we present an elevation mapping pipeline that leverages the GPU for fast and efficient processing, with additional features for both navigation and locomotion. We demonstrate our mapping framework through extensive hardware experiments. Our mapping software was successfully deployed for underground exploration during the DARPA Subterranean Challenge and for various quadrupedal locomotion experiments.
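As a concrete picture of what an elevation map stores, the sketch below rasterizes a point cloud into a 2.5D height grid with NumPy. The cell size, bounds, and per-cell max reduction are illustrative choices, and this CPU version does not reproduce the paper's GPU pipeline (CuPy could stand in for NumPy with the same interface).

```python
# Minimal 2.5D elevation map: rasterize an Nx3 point cloud into a height grid.
import numpy as np

def elevation_map(points, cell=0.1, x_range=(-5, 5), y_range=(-5, 5)):
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Keep the highest z per cell (np.maximum.at handles repeated indices).
    grid_flat = np.full(nx * ny, -np.inf)
    np.maximum.at(grid_flat, ix[ok] * ny + iy[ok], points[ok, 2])
    grid = grid_flat.reshape(nx, ny)
    grid[np.isinf(grid)] = np.nan        # cells with no measurements stay unknown
    return grid
```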
Keyword: localization
Dynamic Registration: Joint Ego Motion Estimation and 3D Moving Object Detection in Dynamic Environment
Authors: Wenyu Li, Xinyu Zhang, Zijun Wang, Shichun Guo, Nan Qiu, Jun Li
Abstract
Localization in a dynamic environment suffers from moving objects. Removing dynamic objects is crucial in this situation but becomes tricky when their motion is coupled with ego-motion. In this paper, instead of proposing a new SLAM framework, we aim at a more general strategy for localization scenarios, so that Dynamic Registration can be integrated with any LiDAR SLAM system. We utilize 3D object detection to obtain potential moving objects and remove them temporarily. We then propose Dynamic Registration, which iteratively estimates ego-motion and segments moving objects until no new static object is produced. Static objects are merged back into the environment. Finally, we obtain segmented dynamic objects, a static environment enriched with static objects, and an ego-motion estimate in the dynamic environment. We evaluate the performance of our proposed method on the KITTI Tracking dataset. Results show stable and consistent improvements over other classical registration algorithms.
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection
Authors: Konstantinos A. Tsintotas, Loukas Bampis, Antonios Gasteratos
Abstract
Where am I? This is one of the most critical questions that any intelligent system should answer to decide whether it navigates to a previously visited area. This problem has long been acknowledged for its challenging nature in simultaneous localization and mapping (SLAM), wherein the robot needs to correctly associate the incoming sensory data to the database allowing consistent map generation. The significant advances in computer vision achieved over the last 20 years, the increased computational power, and the growing demand for long-term exploration contributed to efficiently performing such a complex task with inexpensive perception sensors. In this article, visual loop closure detection, which formulates a solution based solely on appearance input data, is surveyed. We start by briefly introducing place recognition and SLAM concepts in robotics. Then, we describe a loop closure detection system's structure, covering an extensive collection of topics, including the feature extraction, the environment representation, the decision-making step, and the evaluation process. We conclude by discussing open and new research challenges, particularly concerning the robustness in dynamic environments, the computational complexity, and scalability in long-term operations. The article aims to serve as a tutorial and a position paper for newcomers to visual loop closure detection.
Keyword: SLAM
Dynamic Registration: Joint Ego Motion Estimation and 3D Moving Object Detection in Dynamic Environment
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection
Keyword: Visual inertial
There is no result
Keyword: livox
There is no result
Keyword: loam
There is no result
Keyword: Visual inertial odometry
There is no result
Keyword: lidar
Building Change Detection using Multi-Temporal Airborne LiDAR Data
Dynamic Registration: Joint Ego Motion Estimation and 3D Moving Object Detection in Dynamic Environment
Keyword: loop detection
There is no result
Keyword: autonomous driving
Toward Policy Explanations for Multi-Agent Reinforcement Learning
Dataset for Robust and Accurate Leading Vehicle Velocity Recognition
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame
Keyword: mapping
Self-scalable Tanh (Stan): Faster Convergence and Better Generalization in Physics-informed Neural Networks
A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection
Elevation Mapping for Locomotion and Navigation using GPU
Keyword: localization
Dynamic Registration: Joint Ego Motion Estimation and 3D Moving Object Detection in Dynamic Environment
The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection