Abstract
Self-driving cars must detect vehicles, pedestrians, and other traffic participants accurately to operate safely. Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them. To address this challenge, we leverage valuable information from the past: in particular, data collected in past traversals of the same scene. We posit that these past data, which are typically discarded, provide rich contextual information for disambiguating the above-mentioned challenging cases. To this end, we propose a novel, end-to-end trainable Hindsight framework to extract this contextual information from past traversals and store it in an easy-to-query data structure, which can then be leveraged to aid future 3D object detection of the same scene. We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets, most notably by more than 300% on the challenging cases.
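The core mechanism described above, extracting context from past traversals and storing it in an easy-to-query structure, can be illustrated with a minimal sketch. The voxel hashing, mean pooling, and 0.5 m voxel size below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

VOXEL_SIZE = 0.5  # metres; assumed value for illustration

def voxel_key(xyz):
    """Quantize a 3D point into an integer voxel key."""
    return tuple(np.floor(xyz / VOXEL_SIZE).astype(int))

def build_feature_bank(past_points, past_features):
    """Aggregate per-point context features from past traversals into a voxel-keyed bank.
    past_points: (N, 3) array; past_features: (N, C) array of learned features."""
    bank = {}
    for p, f in zip(past_points, past_features):
        bank.setdefault(voxel_key(p), []).append(f)
    # Mean pooling here; any permutation-invariant aggregation would do.
    return {k: np.mean(v, axis=0) for k, v in bank.items()}

def query_feature_bank(bank, scan_points, feat_dim):
    """Attach stored context features to each point of the current LiDAR scan."""
    default = np.zeros(feat_dim)
    return np.stack([bank.get(voxel_key(p), default) for p in scan_points])
```

At detection time, the queried features would be concatenated with the per-point features of the current scan before they enter the detector backbone.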
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
Abstract
LiDAR and camera are two important sensors for 3D object detection in autonomous driving. Despite the increasing popularity of sensor fusion in this field, the robustness against inferior image conditions, e.g., bad illumination and sensor misalignment, is under-explored. Existing fusion methods are easily affected by such conditions, mainly due to a hard association of LiDAR points and image pixels, established by calibration matrices. We propose TransFusion, a robust solution to LiDAR-camera fusion with a soft-association mechanism to handle inferior image conditions. Specifically, our TransFusion consists of convolutional backbones and a detection head based on a transformer decoder. The first layer of the decoder predicts initial bounding boxes from a LiDAR point cloud using a sparse set of object queries, and its second decoder layer adaptively fuses the object queries with useful image features, leveraging both spatial and contextual relationships. The attention mechanism of the transformer enables our model to adaptively determine where and what information should be taken from the image, leading to a robust and effective fusion strategy. We additionally design an image-guided query initialization strategy to deal with objects that are difficult to detect in point clouds. TransFusion achieves state-of-the-art performance on large-scale datasets. We provide extensive experiments to demonstrate its robustness against degraded image quality and calibration errors. We also extend the proposed method to the 3D tracking task and achieve 1st place on the nuScenes tracking leaderboard, showing its effectiveness and generalization capability.
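A minimal PyTorch sketch of the two-layer decoding idea follows. It is not the official TransFusion code: the learned query embeddings, the single shared box head, and the pre-flattened feature inputs are simplifications (the paper initializes queries from the point cloud with image guidance, for instance), but it shows how one decoder layer can work on LiDAR features alone while the next softly attends to image features.

```python
import torch
import torch.nn as nn

class TwoStageFusionDecoder(nn.Module):
    """Illustrative two-layer decoder: layer 1 attends to LiDAR BEV features to form
    initial boxes; layer 2 lets the same queries softly attend to image features,
    replacing a hard pixel-to-point association."""
    def __init__(self, d_model=128, num_queries=200, num_heads=8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)  # simplification; not the paper's query initialization
        self.lidar_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.box_head = nn.Linear(d_model, 7)  # (x, y, z, w, l, h, yaw) per query

    def forward(self, lidar_feats, image_feats):
        # lidar_feats: (B, N_bev, d); image_feats: (B, N_pix, d), both already flattened
        q = self.queries.weight.unsqueeze(0).expand(lidar_feats.size(0), -1, -1)
        q, _ = self.lidar_attn(q, lidar_feats, lidar_feats)   # initial boxes from LiDAR only
        initial_boxes = self.box_head(q)
        q, _ = self.image_attn(q, image_feats, image_feats)   # soft association with image pixels
        refined_boxes = self.box_head(q)
        return initial_boxes, refined_boxes
```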
Keyword: loop detection
There is no result
Keyword: autonomous driving
Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
Authors: Yurong You, Katie Z Luo, Xiangyu Chen, Junan Chen, Wei-Lun Chao, Wen Sun, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Self-driving cars must detect vehicles, pedestrians, and other traffic participants accurately to operate safely. Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them. To address this challenge, we leverage valuable information from the past: in particular, data collected in past traversals of the same scene. We posit that these past data, which are typically discarded, provide rich contextual information for disambiguating the above-mentioned challenging cases. To this end, we propose a novel, end-to-end trainable Hindsight framework to extract this contextual information from past traversals and store it in an easy-to-query data structure, which can then be leveraged to aid future 3D object detection of the same scene. We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets, most notably by more than 300% on the challenging cases.
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
Abstract
LiDAR and camera are two important sensors for 3D object detection in autonomous driving. Despite the increasing popularity of sensor fusion in this field, the robustness against inferior image conditions, e.g., bad illumination and sensor misalignment, is under-explored. Existing fusion methods are easily affected by such conditions, mainly due to a hard association of LiDAR points and image pixels, established by calibration matrices. We propose TransFusion, a robust solution to LiDAR-camera fusion with a soft-association mechanism to handle inferior image conditions. Specifically, our TransFusion consists of convolutional backbones and a detection head based on a transformer decoder. The first layer of the decoder predicts initial bounding boxes from a LiDAR point cloud using a sparse set of object queries, and its second decoder layer adaptively fuses the object queries with useful image features, leveraging both spatial and contextual relationships. The attention mechanism of the transformer enables our model to adaptively determine where and what information should be taken from the image, leading to a robust and effective fusion strategy. We additionally design an image-guided query initialization strategy to deal with objects that are difficult to detect in point clouds. TransFusion achieves state-of-the-art performance on large-scale datasets. We provide extensive experiments to demonstrate its robustness against degraded image quality and calibration errors. We also extend the proposed method to the 3D tracking task and achieve 1st place on the nuScenes tracking leaderboard, showing its effectiveness and generalization capability.
Dense Residual Networks for Gaze Mapping on Indian Roads
Abstract
In the recent past, greater accessibility to powerful computational resources has enabled progress in Deep Learning and Computer Vision to grow by leaps and bounds. This, in turn, has advanced the domain of Autonomous Driving and Navigation Systems. Most existing research has focused on driving scenarios on European or American roads. Our paper draws special attention to the Indian driving context. To this effect, we propose a novel architecture, DR-Gaze, which maps the driver's gaze onto the road. We compare our results with previous works and state-of-the-art results on the DGAZE dataset. Our code will be made publicly available upon acceptance of our paper.
Transferring Multi-Agent Reinforcement Learning Policies for Autonomous Driving using Sim-to-Real
Abstract
Autonomous Driving requires high levels of coordination and collaboration between agents. Achieving effective coordination in multi-agent systems is a difficult task that remains largely unresolved. Multi-Agent Reinforcement Learning has arisen as a powerful method to accomplish this task because it considers the interaction between agents and also allows for decentralized training -- which makes it highly scalable. However, transferring policies from simulation to the real world is a big challenge, even for single-agent applications. Multi-agent systems add additional complexities to the Sim-to-Real gap due to agent collaboration and environment synchronization. In this paper, we propose a method to transfer multi-agent autonomous driving policies to the real world. For this, we create a multi-agent environment that imitates the dynamics of the Duckietown multi-robot testbed, and train multi-agent policies using the MAPPO algorithm with different levels of domain randomization. We then transfer the trained policies to the Duckietown testbed and compare the use of the MAPPO algorithm against a traditional rule-based method. We show that the rewards of the policies transferred with MAPPO and domain randomization are, on average, 1.85 times higher than those of the rule-based method. Moreover, we show that different levels of parameter randomization have a substantial impact on the Sim-to-Real gap.
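As a rough illustration of what "different levels of domain randomization" can mean in practice, the sketch below samples simulator parameters per episode around nominal values; the parameter names and ranges are assumptions for illustration, not the settings used in the paper.

```python
import random

def sample_randomized_params(level=1.0):
    """Sample simulator parameters around nominal values; `level` scales the
    randomization range (0 = no randomization). Names and values are hypothetical."""
    nominal = {"wheel_radius": 0.03, "motor_gain": 1.0, "camera_noise_std": 0.01}
    ranges = {"wheel_radius": 0.003, "motor_gain": 0.1, "camera_noise_std": 0.01}
    return {k: nominal[k] + random.uniform(-1.0, 1.0) * level * ranges[k] for k in nominal}

# A new parameter set would be drawn at the start of every training episode.
print(sample_randomized_params(level=0.5))
```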
Optical Flow Based Motion Detection for Autonomous Driving
Authors: Ka Man Lo
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Motion detection is a fundamental but challenging task for autonomous driving. In particular scenes, such as highways, remote objects require extra attention for better control decisions. Aiming at distant vehicles, we train a neural network model to classify their motion status using the optical flow field as input. The experiments yield high accuracy, showing that our idea is viable and promising. The trained model also achieves acceptable performance for nearby vehicles. Our work is implemented in PyTorch. Open tools including nuScenes, FastFlowNet and RAFT are used. Visualization videos are available at https://www.youtube.com/playlist?list=PLVVrWgq4OrlBnRebmkGZO1iDHEksMHKGk .
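A minimal sketch of the classification idea (not the authors' exact model): a small CNN takes a cropped two-channel optical-flow patch around a detected vehicle and predicts its motion status. The architecture and class count are assumptions.

```python
import torch
import torch.nn as nn

class FlowMotionClassifier(nn.Module):
    """Classify a vehicle's motion status (e.g., moving vs. static) from a
    cropped optical-flow patch with (u, v) channels."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, flow_patch):  # flow_patch: (B, 2, H, W)
        return self.net(flow_patch)

logits = FlowMotionClassifier()(torch.randn(4, 2, 64, 64))  # 4 cropped flow patches
print(logits.shape)  # torch.Size([4, 2])
```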
Learning from All Vehicles
Abstract
In this paper, we present a system to train driving policies from experiences collected not just from the ego-vehicle, but all vehicles that it observes. This system uses the behaviors of other agents to create more diverse driving scenarios without collecting additional data. The main difficulty in learning from other vehicles is that there is no sensor information. We use a set of supervisory tasks to learn an intermediate representation that is invariant to the viewpoint of the controlling vehicle. This not only provides a richer signal at training time but also allows more complex reasoning during inference. Learning how all vehicles drive helps predict their behavior at test time and can avoid collisions. We evaluate this system in closed-loop driving simulations. Our system outperforms all prior methods on the public CARLA Leaderboard by a wide margin, improving driving score by 25 and route completion rate by 24 points. Our method won the 2021 CARLA Autonomous Driving challenge. Demo videos are available at https://dotchen.github.io/LAV/.
Keyword: mapping
Gated Domain-Invariant Feature Disentanglement for Domain Generalizable Object Detection
Authors: Haozhuo Zhang, Huimin Yu, Yuming Yan, Runfa Wang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
For Domain Generalizable Object Detection (DGOD), Disentangled Representation Learning (DRL) helps considerably by explicitly disentangling Domain-Invariant Representations (DIR) from Domain-Specific Representations (DSR). Considering that the domain category is an attribute of the input data, it should be feasible for networks to fit a specific mapping that projects DSR into feature channels exclusive to domain-specific information, so that a much cleaner disentanglement of DIR from DSR can be achieved simply along the channel dimension. Inspired by this idea, we propose a novel DRL method for DGOD, termed Gated Domain-Invariant Feature Disentanglement (GDIFD). In GDIFD, a Channel Gate Module (CGM) learns to output channel gate signals close to either 0 or 1, which can mask out the channels that carry domain-specific information helpful for domain recognition. With the proposed GDIFD, the backbone in our framework can fit the desired mapping easily, which enables the channel-wise disentanglement. In experiments, we demonstrate that our approach is highly effective and achieves state-of-the-art DGOD performance.
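The channel-gating idea can be sketched as below. The squeeze-and-excitation-style gate network and the temperature used to push gates towards 0/1 are assumptions, not the paper's exact CGM design.

```python
import torch
import torch.nn as nn

class ChannelGateModule(nn.Module):
    """Produce near-binary per-channel gates so that channels carrying
    domain-specific information can be masked out along the channel dimension."""
    def __init__(self, channels, temperature=0.1):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // 4), nn.ReLU(),
                                nn.Linear(channels // 4, channels))
        self.temperature = temperature

    def forward(self, x):  # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))                                # global average pooling
        gates = torch.sigmoid(self.fc(pooled) / self.temperature)  # pushed towards 0 or 1
        dir_feat = x * gates.unsqueeze(-1).unsqueeze(-1)           # kept as domain-invariant
        dsr_feat = x * (1.0 - gates).unsqueeze(-1).unsqueeze(-1)   # masked as domain-specific
        return dir_feat, dsr_feat, gates
```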
Root-aligned SMILES for Molecular Retrosynthesis Prediction
Authors: Zipeng Zhong, Jie Song, Zunlei Feng, Tiantao Liu, Lingxiang Jia, Shaolun Liu, Min Wu, Tingjun Hou, Mingli Song
Subjects: Machine Learning (cs.LG); Chemical Physics (physics.chem-ph); Biomolecules (q-bio.BM)
Abstract
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to discover precursor molecules that can be used to synthesize a target molecule. A popular paradigm of existing computational retrosynthesis methods formulates retrosynthesis prediction as a sequence-to-sequence translation problem, where the typical SMILES representations are adopted for both reactants and products. However, general-purpose SMILES neglects two characteristics of retrosynthesis: 1) the search space of the reactants is quite huge, and 2) the molecular graph topology is largely unaltered from products to reactants, resulting in suboptimal performance when SMILES is straightforwardly applied. In this article, we propose the root-aligned SMILES (R-SMILES), which specifies a tightly aligned one-to-one mapping between the product and the reactant SMILES, to narrow the string representation discrepancy for more efficient retrosynthesis. As the minimum edit distance between the input and the output is significantly decreased with the proposed R-SMILES, the computational model is largely relieved from learning the complex syntax and can be dedicated to learning the chemical knowledge for retrosynthesis. We compare the proposed R-SMILES with various state-of-the-art baselines on different benchmarks and show that it significantly outperforms them all, demonstrating the superiority of the proposed method.
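The rooting mechanism that R-SMILES builds on can be demonstrated with RDKit, which can serialize a molecule starting from a chosen root atom. This only illustrates rooted SMILES generation; the paper's actual alignment procedure (picking corresponding roots on product and reactant via atom mapping) is not reproduced here.

```python
from rdkit import Chem

def rooted_smiles(smiles, root_atom_idx):
    """Return a SMILES string written starting from the chosen root atom."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None or root_atom_idx >= mol.GetNumAtoms():
        return None
    return Chem.MolToSmiles(mol, rootedAtAtom=root_atom_idx, canonical=True)

# The same molecule (aspirin) serialized from two different roots yields two
# different, but equivalent, strings -- choosing aligned roots on product and
# reactant is what shrinks the edit distance between the two sequences.
print(rooted_smiles("CC(=O)Oc1ccccc1C(=O)O", 0))
print(rooted_smiles("CC(=O)Oc1ccccc1C(=O)O", 5))
```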
The neighbour sum distinguishing edge-weighting with local constraints
Authors: Antoine Dailly (UNAM, G-SCOP_OC), Elżbieta Sidorowicz
Abstract
A $k$-edge-weighting of $G$ is a mapping $\omega:E(G)\longrightarrow \{1,\ldots,k\}$. The edge-weighting naturally induces a vertex colouring $\sigma_{\omega}:V(G)\longrightarrow \mathbb{N}$ given by $\sigma_{\omega}(v)=\sum_{u\in N_G(v)}\omega(vu)$ for every $v\in V(G)$. The edge-weighting $\omega$ is neighbour sum distinguishing if it yields a proper vertex colouring $\sigma_{\omega}$, i.e., $\sigma_{\omega}(u)\neq \sigma_{\omega}(v)$ for every edge $uv$ of $G$. We investigate a neighbour sum distinguishing edge-weighting with local constraints, namely, we assume that the set of edges incident to a vertex of large degree is not monochromatic. The graph is nice if it has no components isomorphic to $K_2$. We prove that every nice graph with maximum degree at most 5 admits a neighbour sum distinguishing $(\Delta(G)+2)$-edge-weighting such that all the vertices of degree at least 2 are incident with at least two edges of different weights. Furthermore, we prove that every nice graph admits a neighbour sum distinguishing $7$-edge-weighting such that all the vertices of degree at least 6 are incident with at least two edges of different weights. Finally, we show that nice bipartite graphs admit a neighbour sum distinguishing $6$-edge-weighting such that all the vertices of degree at least 2 are incident with at least two edges of different weights.
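For concreteness, a small checker for the central definition follows: an edge-weighting is neighbour sum distinguishing exactly when the induced weighted-degree colouring is proper. The graph encoding is an assumption made for illustration.

```python
from collections import defaultdict

def is_neighbour_sum_distinguishing(edges, weights):
    """edges: list of (u, v) pairs; weights: dict mapping (u, v) -> weight in {1, ..., k}.
    Returns True iff adjacent vertices receive different sums of incident edge weights."""
    sigma = defaultdict(int)
    for (u, v) in edges:
        sigma[u] += weights[(u, v)]
        sigma[v] += weights[(u, v)]
    return all(sigma[u] != sigma[v] for (u, v) in edges)

# A path on three vertices: weights 1 and 2 give vertex sums 1, 3, 2, so all
# neighbouring pairs differ and the weighting is neighbour sum distinguishing.
print(is_neighbour_sum_distinguishing([(0, 1), (1, 2)], {(0, 1): 1, (1, 2): 2}))  # True
```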
BERT-ASC: Auxiliary-Sentence Construction for Implicit Aspect Learning in Sentiment Analysis
Authors: Ahmed Murtadha, Shengfeng Pan, Bo Wen, Jianlin Su, Wenze Zhang, Yunfeng Liu
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
The aspect-based sentiment analysis (ABSA) task aims to associate a piece of text with a set of aspects and meanwhile infer their respective sentiment polarities. Up to now, the state-of-the-art approaches are built upon fine-tuning of various pre-trained language models. They commonly aim to learn an aspect-specific representation from the corpus. Unfortunately, the aspect is often expressed implicitly through a set of representatives, which renders the implicit mapping process unattainable unless sufficient labeled examples are available. In this paper, we propose to jointly address the aspect categorization and aspect-based sentiment subtasks in a unified framework. Specifically, we first introduce a simple but effective mechanism that combines semantic and syntactic information to construct auxiliary sentences for the implicit aspect. Then, we encourage BERT to learn the aspect-specific representation in response to the automatically constructed auxiliary sentence instead of the aspect itself. Finally, we empirically evaluate the performance of the proposed solution through a comparative study on real benchmark datasets for both ABSA and Targeted-ABSA tasks. Our extensive experiments show that it consistently achieves state-of-the-art performance in aspect categorization and aspect-based sentiment analysis across all datasets, with considerable improvement margins.
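The sentence-pair encoding step can be sketched with the Hugging Face transformers library: the review and a constructed auxiliary sentence are fed to BERT jointly. The auxiliary sentence shown is a made-up placeholder; the paper's semantic/syntactic construction procedure is not reproduced.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

review = "The place was packed but the staff got us seated quickly."
auxiliary = "service: staff, seated, waiter"  # hypothetical auxiliary sentence for an implicit aspect

# Encode the review and the auxiliary sentence as a pair so BERT attends to both jointly.
inputs = tokenizer(review, auxiliary, return_tensors="pt", truncation=True)
outputs = model(**inputs)
aspect_representation = outputs.last_hidden_state[:, 0]  # [CLS] embedding fed to a classifier head
print(aspect_representation.shape)
```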
Monitoring and mapping of crop fields with UAV swarms based on information gain
Authors: Carlos Carbone, Dario Albani, Federico Magistri, Dimitri Ognibene, Cyrill Stachniss, Gert Kootstra, Daniele Nardi, Vito Trianni
Abstract
Monitoring crop fields to map features like weeds can be efficiently performed with unmanned aerial vehicles (UAVs) that can cover large areas in a short time due to their privileged perspective and motion speed. However, the need for high-resolution images for precise classification of features (e.g., detecting even the smallest weeds in the field) contrasts with the limited payload and flight time of current UAVs. Thus, several flights are needed to cover a large field uniformly. However, the assumption that the whole field must be observed with the same precision is unnecessary when features are heterogeneously distributed, like weeds appearing in patches over the field. In this case, an adaptive approach that focuses only on relevant areas can perform better, especially when multiple UAVs are employed simultaneously. Leveraging a swarm-robotics approach, we propose a monitoring and mapping strategy that adaptively chooses the target areas based on the expected information gain, which measures the potential for uncertainty reduction due to further observations. The proposed strategy scales well with group size and leads to smaller mapping errors than optimal pre-planned monitoring approaches.
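A simplified sketch of choosing target areas by expected information gain: with a binary weed/no-weed belief per grid cell and an (assumed) perfect observation model, the expected gain of observing a cell equals its current entropy, so UAVs head for the most uncertain cells first.

```python
import numpy as np

def expected_information_gain(p_weed):
    """Binary entropy of the current weed-presence belief per cell; under a perfect
    observation model this equals the expected uncertainty reduction of observing the cell."""
    p = np.clip(p_weed, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

belief = np.array([[0.5, 0.9],
                   [0.1, 0.6]])
gain = expected_information_gain(belief)
next_cell = np.unravel_index(np.argmax(gain), gain.shape)  # most informative cell to visit next
print(gain.round(2), next_cell)
```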
Cross-View Panorama Image Synthesis
Authors: Songsong Wu, Hao Tang, Xiao-Yuan Jing, Haifeng Zhao, Jianjun Qian, Nicu Sebe, Yan Yan
Subjects: Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
Abstract
In this paper, we tackle the problem of synthesizing a ground-view panorama image conditioned on a top-view aerial image, which is a challenging problem due to the large gap between the two image domains with different viewpoints. Instead of learning cross-view mapping in a feedforward pass, we propose a novel adversarial feedback GAN framework named PanoGAN with two key components: an adversarial feedback module and a dual branch discrimination strategy. First, the aerial image is fed into the generator to produce a target panorama image and its associated segmentation map in favor of model training with layout semantics. Second, the feature responses of the discriminator encoded by our adversarial feedback module are fed back to the generator to refine the intermediate representations, so that the generation performance is continually improved through an iterative generation process. Third, to pursue high-fidelity and semantic consistency of the generated panorama image, we propose a pixel-segmentation alignment mechanism under the dual branch discrimination strategy to facilitate cooperation between the generator and the discriminator. Extensive experimental results on two challenging cross-view image datasets show that PanoGAN enables high-quality panorama image generation with more convincing details than state-of-the-art approaches. The source code and trained models are available at https://github.com/sswuai/PanoGAN .
Keyword: localization
Using Evolutionary Coupling to Establish Relevance Links Between Tests and Code Units. A case study on fault localization
Abstract
Many software engineering techniques, such as fault localization, operate based on relevance relationships between tests and code. These relationships are often inferred from dynamic test execution information (test execution traces) that approximates the link between relevant code units and the program behaviour asserted by the tests. Unfortunately, in practice dynamic information is not always available, due to the overheads introduced by instrumentation or the nature of production environments. To deal with this issue, we propose CEMENT, a static technique that automatically infers such test-code relationships from the projects' evolution. The key idea is that developers make relevant changes to test and code units within the same period of time, i.e., the co-evolution of tests and code units reflects a probable link between them. We evaluate CEMENT on 15 open source projects and show that it indeed captures relevant links. Additionally, we perform a fault localization case study where we compare CEMENT with an existing Information Retrieval-based Fault Localization (IRFL) technique and show that it achieves comparable performance. A further analysis of our results reveals a small overlap between the faults successfully localized by the two approaches, suggesting complementarity. In particular, out of the 39 successfully localized faults, two are common, while CEMENT and IRFL localize 16 and 21 faults, respectively. These results demonstrate that test and code evolutionary coupling can effectively support testing and debugging activities.
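A rough sketch of the evolutionary-coupling intuition (not CEMENT itself): count how often a test file and a source file change in the same commit and use the count as a static relevance score. The file-name heuristic for identifying tests is an assumption.

```python
from collections import defaultdict

def co_change_scores(commits):
    """commits: list of sets of file paths changed together in one commit.
    Returns a (test_file, source_file) -> co-change count mapping."""
    scores = defaultdict(int)
    for changed in commits:
        tests = {f for f in changed if "test" in f.lower()}
        sources = changed - tests
        for t in tests:
            for s in sources:
                scores[(t, s)] += 1
    return scores

history = [
    {"tests/test_parser.py", "src/parser.py"},
    {"tests/test_parser.py", "src/parser.py", "src/lexer.py"},
    {"src/lexer.py", "src/ast.py"},
]
print(sorted(co_change_scores(history).items(), key=lambda kv: -kv[1]))
```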
Audio visual character profiles for detecting background characters in entertainment media
Authors: Rahul Sharma, Shrikanth Narayanan
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
An essential goal of computational media intelligence is to support understanding of how media stories -- be it news, commercial, or entertainment media -- represent and reflect society and how these portrayals are perceived. People are a central element of media stories. This paper focuses on understanding the representation and depiction of background characters in media, primarily movies and TV shows. We define background characters as those who do not participate vocally in any scene throughout the movie and address the problem of localizing background characters in videos. We use an active speaker localization system to extract high-confidence face-speech associations and generate audio-visual profiles for talking characters in a movie by automatically clustering them. Using a face verification system, we then prune all the face-tracks that match any of the generated character profiles and obtain the background character face-tracks. We curate a background character dataset that provides annotations for background characters in a set of TV shows, and use it to evaluate the performance of the background character detection framework.
Ray3D: ray-based 3D human pose estimation for monocular absolute 3D localization
Abstract
In this paper, we propose Ray3D, a novel monocular ray-based approach for absolute 3D human pose estimation with a calibrated camera. Accurate and generalizable absolute 3D human pose estimation from monocular 2D pose input is an ill-posed problem. To address this challenge, we convert the input from pixel space to 3D normalized rays. This conversion makes our approach robust to camera intrinsic parameter changes. To deal with in-the-wild camera extrinsic parameter variations, Ray3D explicitly takes the camera extrinsic parameters as an input and jointly models the distribution between the 3D pose rays and the camera extrinsic parameters. This novel network design is the key to the outstanding generalizability of the Ray3D approach. To gain a comprehensive understanding of how camera intrinsic and extrinsic parameter variations affect the accuracy of absolute 3D key-point localization, we conduct in-depth systematic experiments on three single-person 3D benchmarks as well as one synthetic benchmark. These experiments demonstrate that our method significantly outperforms existing state-of-the-art models. Our code and the synthetic dataset are available at https://github.com/YxZhxn/Ray3D .
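The pixel-to-ray conversion described above is standard pinhole-camera geometry; a minimal sketch with assumed intrinsics follows. It illustrates why the representation is robust to intrinsics: the same pose observed through a different camera maps to the same set of rays once the corresponding intrinsics are applied.

```python
import numpy as np

def pixels_to_rays(keypoints_2d, K):
    """Back-project 2D keypoints (pixel coordinates) into unit-norm viewing rays.
    keypoints_2d: (N, 2) array of (u, v); K: (3, 3) camera intrinsic matrix."""
    ones = np.ones((keypoints_2d.shape[0], 1))
    pixels_h = np.hstack([keypoints_2d, ones])      # homogeneous pixel coordinates
    rays = (np.linalg.inv(K) @ pixels_h.T).T        # normalized camera-space directions
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

K = np.array([[1000.0,    0.0, 640.0],   # assumed focal lengths and principal point
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
print(pixels_to_rays(np.array([[640.0, 360.0], [800.0, 500.0]]), K))
```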
Keyword: SLAM
There is no result
Keyword: Visual inertial
There is no result
Keyword: livox
There is no result
Keyword: loam
There is no result
Keyword: Visual inertial odometry
There is no result
Keyword: lidar
Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
Keyword: loop detection
There is no result
Keyword: autonomous driving
Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers
Dense Residual Networks for Gaze Mapping on Indian Roads
Transferring Multi-Agent Reinforcement Learning Policies for Autonomous Driving using Sim-to-Real
Optical Flow Based Motion Detection for Autonomous Driving
Learning from All Vehicles
Keyword: mapping
Gated Domain-Invariant Feature Disentanglement for Domain Generalizable Object Detection
Root-aligned SMILES for Molecular Retrosynthesis Prediction
The neighbour sum distinguishing edge-weighting with local constraints
BERT-ASC: Auxiliary-Sentence Construction for Implicit Aspect Learning in Sentiment Analysis
Monitoring and mapping of crop fields with UAV swarms based on information gain
Cross-View Panorama Image Synthesis
Keyword: localization
Using Evolutionary Coupling to Establish Relevance Links Between Tests and Code Units. A case study on fault localization
Audio visual character profiles for detecting background characters in entertainment media
Ray3D: ray-based 3D human pose estimation for monocular absolute 3D localization