Abstract
Machine unlearning (MU) is a field that is gaining increasing attention due to the need to remove or modify predictions made by machine learning (ML) models. While model training has become more efficient and accurate, the ability to unlearn previously learned information has become increasingly important in fields such as privacy, security, and fairness. This paper presents a comprehensive survey of MU, covering current state-of-the-art techniques and approaches, including data deletion, perturbation, and model updates. Commonly used metrics and datasets are also presented. The paper further highlights the challenges that need to be addressed, including attack sophistication, standardization, transferability, interpretability, training data, and resource constraints. Its contributions include discussions of the potential benefits of MU and its future directions in Natural Language Processing, Computer Vision, and Recommender Systems. Additionally, the paper emphasizes the need for researchers and practitioners to continue exploring and refining unlearning techniques so that ML models can adapt to changing circumstances while maintaining user trust. The importance of unlearning is further highlighted in making Artificial Intelligence (AI) more trustworthy and transparent, especially given the increasing importance of AI in domains that involve large amounts of personal user data.
Efficient Training of Multi-task Neural Solver with Multi-armed Bandits
Abstract
Efficiently training a multi-task neural solver for various combinatorial optimization problems (COPs) has been little studied so far. In this paper, we propose a general and efficient training paradigm based on multi-armed bandits to deliver a unified multi-task neural solver. To this end, we resort to a theoretical loss decomposition for multiple tasks under an encoder-decoder framework, which enables more efficient training via proper bandit task-sampling algorithms through an intra-task influence matrix. Our method achieves much higher overall performance with either limited training budgets or the same training epochs, compared to standard training schedules, and is thus promising for guiding the efficient training of other multi-task large models. Additionally, the influence matrix provides empirical evidence for some common practices in the area of learning to optimize, which in turn supports the validity of our approach.
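As a concrete illustration of bandit task sampling (the paper's sampler is built on its intra-task influence matrix and is only summarized above), the sketch below uses the classic UCB1 bandit to pick which COP task to train on next. The callback `train_step` is a hypothetical stand-in that runs one training step on the chosen task and returns a normalized reward such as a loss decrease in [0, 1].

```python
import numpy as np

def ucb1_task_sampler(n_tasks, n_rounds, train_step, c=2.0):
    """UCB1 over tasks: treat each COP task as an arm and the observed
    training signal as the reward. One plausible instantiation of
    bandit task sampling, not the paper's exact algorithm."""
    counts = np.zeros(n_tasks)
    means = np.zeros(n_tasks)
    for t in range(n_rounds):
        if t < n_tasks:
            task = t                                  # play each task once first
        else:
            ucb = means + np.sqrt(c * np.log(t) / counts)
            task = int(np.argmax(ucb))                # optimistic task choice
        reward = train_step(task)                     # hypothetical callback
        counts[task] += 1
        means[task] += (reward - means[task]) / counts[task]
    return means, counts
```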
ACTC: Active Threshold Calibration for Cold-Start Knowledge Graph Completion
Abstract
Self-supervised knowledge-graph completion (KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph. Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we attempt for the first time cold-start calibration for KGC, where no annotated examples exist initially for calibration, and only a limited number of tuples can be selected for annotation. Our new method ACTC efficiently finds good per-relation thresholds based on a limited set of annotated tuples. In addition to the few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7 percentage points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4 percentage points over different budgets.
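As a rough sketch of the calibration idea for a single relation: fit a logistic regression on a handful of annotated (score, label) pairs, read off the threshold where the predicted correctness probability crosses 0.5, and pseudo-label the unlabeled tuples. ACTC's estimators and selection strategies go beyond this minimal version.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_threshold(annotated_scores, annotated_labels, unlabeled_scores):
    """Per-relation threshold from a few annotated tuples, plus
    correctness estimates for unlabeled tuples. Requires both positive
    and negative annotations; a sketch, not ACTC's full procedure."""
    clf = LogisticRegression().fit(
        np.asarray(annotated_scores).reshape(-1, 1), annotated_labels)
    # Estimated correctness probabilities for the unlabeled tuples.
    pseudo = clf.predict_proba(np.asarray(unlabeled_scores).reshape(-1, 1))[:, 1]
    # Score at which P(correct) = 0.5 for this relation.
    threshold = -clf.intercept_[0] / clf.coef_[0, 0]
    return threshold, pseudo

thr, pseudo = calibrate_threshold([0.1, 0.4, 0.8, 0.9], [0, 0, 1, 1],
                                  [0.2, 0.5, 0.85])
```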
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM
Authors: Wen-Yu Hua, Brian Williams, Davood Shamsi
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Abstract
Text embeddings are useful features for several NLP applications, such as sentence similarity, text clustering, and semantic search. In this paper, we present a Low-rank Adaptation with a Contrastive objective on top of 8-bit Siamese-BLOOM, a multilingual large language model optimized to produce semantically meaningful sentence embeddings. The innovation is threefold. First, we cast the BLOOM weights to 8-bit values. Second, we fine-tune BLOOM with a scalable adapter (LoRA) and an 8-bit Adam optimizer for sentence similarity classification. Third, we apply a Siamese architecture to the BLOOM model with a contrastive objective to mitigate the scarcity of multilingual labeled data. The experimental results show that the quality of the learned embeddings from LACoS-BLOOM is proportional to the number of model parameters and the amount of unlabeled training data. With this parameter-efficient fine-tuning design, we are able to run BLOOM with 7.1 billion parameters end-to-end on a single GPU machine with 32GB of memory. Compared to the previous solution Sentence-BERT, we achieve significant improvements on both English and multilingual STS tasks.
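A sketch of the contrastive part, assuming a standard cosine-distance contrastive loss over labeled sentence pairs (the paper's exact objective may differ); the embeddings would come from the shared, 8-bit, LoRA-adapted encoder applied to both sides of each pair.

```python
import torch
import torch.nn.functional as F

def siamese_contrastive_loss(emb_a, emb_b, labels, margin=0.5):
    """Pull similar pairs together and push dissimilar pairs beyond a
    margin, measured in cosine distance. labels: 1 = similar pair."""
    cos = F.cosine_similarity(emb_a, emb_b)           # in [-1, 1]
    dist = 1.0 - cos                                   # cosine distance
    pos = labels * dist.pow(2)                         # similar pairs
    neg = (1 - labels) * F.relu(margin - dist).pow(2)  # dissimilar pairs
    return (pos + neg).mean()

# Hypothetical embedding shapes for a batch of 16 sentence pairs.
a, b = torch.randn(16, 768), torch.randn(16, 768)
y = torch.randint(0, 2, (16,)).float()
loss = siamese_contrastive_loss(a, b, y)
```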
Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning
Authors: Ahmad Al Harere, Khloud Al Jallad
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
Abstract
In Islam, readers must apply a set of pronunciation rules called Tajweed rules to recite the Quran in the same way that the angel Jibrael taught the Prophet, Muhammad. The traditional process of learning the correct application of these rules requires a human teacher who must be licensed and highly experienced in detecting mispronunciation. Due to the increasing number of Muslims around the world, there are nowadays not enough Tajweed teachers for every Muslim's daily recitation practice. Therefore, much work has been done on automatic detection of Tajweed rule mispronunciation to help readers recite the Quran correctly more easily and in a shorter time than traditional learning allows. All previous works share three problems. First, most of them focused on classical machine learning algorithms only. Second, they used private datasets with no benchmark to compare against. Third, they did not optimally account for the sequential nature of the input data, although the speech signal is a time series. To overcome these problems, we propose a solution that combines Mel-Frequency Cepstral Coefficient (MFCC) features with Long Short-Term Memory (LSTM) neural networks, which exploit the time-series structure, to detect mispronunciation of Tajweed rules. In addition, our experiments were performed on a public dataset, the QDAT dataset, which contains more than 1500 recordings of correct and incorrect recitation of three Tajweed rules (Separate Stretching, Tight Noon, and Hide). To the best of our knowledge, the QDAT dataset has not been used by any research paper yet. We compared the performance of the proposed LSTM model with the traditional machine learning algorithms used in the state of the art. The LSTM model with time-series input showed clear superiority over traditional machine learning. The accuracy achieved by the LSTM on the QDAT dataset was 96%, 95%, and 96% for the three rules (Separate Stretching, Tight Noon, and Hide), respectively.
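A minimal PyTorch version of the described classifier, with illustrative layer sizes (the paper's exact architecture and preprocessing are not reproduced here): MFCC frames pass through an LSTM whose final hidden state feeds a correct/incorrect head for one rule.

```python
import torch
import torch.nn as nn

class TajweedLSTM(nn.Module):
    """Binary correct/incorrect classifier over MFCC sequences."""
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):            # x: (batch, time, n_mfcc)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the clip
        return self.head(h[-1])

# 8 recitation clips, 200 MFCC frames each (placeholder tensors).
logits = TajweedLSTM()(torch.randn(8, 200, 13))
```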
A Generalizable Physics-informed Learning Framework for Risk Probability Estimation
Authors: Zhuoyuan Wang, Yorie Nakahira
Subjects: Systems and Control (eess.SY); Machine Learning (cs.LG)
Abstract
Accurate estimates of long-term risk probabilities and their gradients are critical for many stochastic safe control methods. However, computing such risk probabilities in real time and in unseen or changing environments is challenging. Monte Carlo (MC) methods cannot accurately evaluate the probabilities and their gradients, as an infinitesimal divisor can amplify the sampling noise. In this paper, we develop an efficient method to evaluate the probabilities of long-term risk and their gradients. The proposed method exploits the fact that long-term risk probability satisfies certain partial differential equations (PDEs), which characterize the neighboring relations between the probabilities, to integrate MC methods and physics-informed neural networks. We provide theoretical guarantees of the estimation error given certain choices of training configurations. Numerical results show the proposed method has better sample efficiency, generalizes well to unseen regions, and can adapt to systems with changing parameters. The proposed method can also accurately estimate the gradients of risk probabilities, which enables first- and second-order techniques on risk probabilities to be used for learning and control.
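To make the integration of MC data and PDE structure concrete, here is a minimal physics-informed loss for a hypothetical 1-D example: it fits noisy Monte Carlo targets while penalizing the residual of an assumed backward PDE. The PDE form, the `drift` callback, and all shapes are assumptions for illustration; the paper's actual PDE and guarantees depend on the safety specification.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def pinn_risk_loss(x, mc_targets, drift, sigma):
    """Fit F(x, t) to MC estimates while penalizing the residual of an
    assumed PDE F_t + f(x) F_x + 0.5 sigma^2 F_xx = 0."""
    x = x.clone().requires_grad_(True)      # columns: (state, time)
    F = net(x)
    grads = torch.autograd.grad(F.sum(), x, create_graph=True)[0]
    F_x, F_t = grads[:, 0:1], grads[:, 1:2]
    F_xx = torch.autograd.grad(F_x.sum(), x, create_graph=True)[0][:, 0:1]
    residual = F_t + drift(x[:, 0:1]) * F_x + 0.5 * sigma ** 2 * F_xx
    return ((F - mc_targets) ** 2).mean() + (residual ** 2).mean()

loss = pinn_risk_loss(torch.rand(128, 2), torch.rand(128, 1),
                      drift=lambda s: -s, sigma=0.2)
```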
Multi-agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation
Abstract
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes, where multiple agents cooperate via communication through a central server. We propose a provably efficient algorithm based on value iteration that enables asynchronous communication while ensuring the advantage of cooperation with low communication overhead. With linear function approximation, we prove that our algorithm enjoys an $\tilde{\mathcal{O}}(d^{3/2}H^2\sqrt{K})$ regret with $\tilde{\mathcal{O}}(dHM^2)$ communication complexity, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the total number of agents, and $K$ is the total number of episodes. We also provide a lower bound showing that a minimal $\Omega(dM)$ communication complexity is required to improve the performance through collaboration.
Perpetual Humanoid Control for Real-time Simulated Avatars
Abstract
We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g., pose estimates from video or generated from language) and unexpected falls. Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces and learns to naturally recover from fail-states. Given reference motions, our controller can perpetually control simulated avatars without requiring resets. At its core, we propose the progressive multiplicative control policy (PMCP), which dynamically allocates new network capacity to learn harder and harder motion sequences. PMCP allows efficient scaling for learning from large-scale motion databases and for adding new tasks, such as fail-state recovery, without catastrophic forgetting. We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live, real-time multi-person avatar use case.
SENDD: Sparse Efficient Neural Depth and Deformation for Tissue Tracking
Authors: Adam Schmidt, Omid Mohareri, Simon DiMaio, Septimiu E. Salcudean
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Deformable tracking and real-time estimation of 3D tissue motion are essential to enable automation and image-guidance applications in robotically assisted surgery. Our model, Sparse Efficient Neural Depth and Deformation (SENDD), extends prior 2D tracking work to estimate flow in 3D space. SENDD introduces the novel contributions of learned detection and sparse per-point depth and 3D flow estimation, all with fewer than half a million parameters. SENDD does this by using graph neural networks over sparse keypoint matches to estimate both depth and 3D flow. We quantify and benchmark SENDD on a comprehensively labelled tissue dataset and compare it to an equivalent 2D flow model. SENDD performs comparably while enabling applications that 2D flow cannot. SENDD can track points and estimate depth at 10 fps on an NVIDIA RTX 4000 for 1280 tracked (query) points, and its cost scales linearly with the number of points. SENDD enables multiple downstream applications that require 3D motion estimation.
Towards L-System Captioning for Tree Reconstruction
Authors: Jannes S. Magnusson, Anna Hilsmann, Peter Eisert
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
This work proposes a novel concept for tree and plant reconstruction by directly inferring a Lindenmayer-System (L-System) word representation from image data in an image-captioning approach. We train a model end-to-end that is able to translate given images into L-System words as a description of the displayed tree. To prove this concept, we demonstrate the applicability on 2D tree topologies. Transferred to real image data, this novel idea could lead to more efficient, accurate, and semantically meaningful tree and plant reconstruction without error-prone point cloud extraction and the other processes usually utilized in tree reconstruction. Furthermore, this approach bypasses the need for a predefined L-System grammar and enables species-specific L-System inference without biological knowledge.
Treasure What You Have: Exploiting Similarity in Deep Neural Networks for Efficient Video Processing
Abstract
Deep learning has enabled various Internet of Things (IoT) applications. Still, designing models with high accuracy and computational efficiency remains a significant challenge, especially in real-time video processing applications. Such applications exhibit high inter- and intra-frame redundancy, allowing further improvement. This paper proposes a similarity-aware training methodology that exploits data redundancy in video frames for efficient processing. Our approach introduces a per-layer regularization that enhances computation reuse by increasing the similarity of weights during training. We validate our methodology on two critical real-time applications, lane detection and scene parsing. We observe an average compression ratio of approximately 50% and a speedup of $\sim 1.5\times$ for different models while maintaining the same accuracy.
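One plausible form of such a per-layer regularizer, shown as a sketch (the paper's exact formulation may differ): penalize differences between adjacent convolution filters so that trained weights become more similar and their computations can be reused across similar frames.

```python
import torch

def similarity_regularizer(model, lam=1e-4):
    """Encourage adjacent convolution filters within each layer to be
    similar, added to the task loss during training."""
    reg = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            w = m.weight                           # (out, in, kh, kw)
            reg = reg + (w[1:] - w[:-1]).abs().sum()
    return lam * reg

# Usage inside a training step (hypothetical names):
#   loss = criterion(net(frames), targets) + similarity_regularizer(net)
```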
State Constrained Stochastic Optimal Control for Continuous and Hybrid Dynamical Systems Using DFBSDE
Authors: Bolun Dai, Prashanth Krishnamurthy, Andrew Papanicolaou, Farshad Khorrami
Abstract
We develop a computationally efficient learning-based forward-backward stochastic differential equations (FBSDE) controller for both continuous and hybrid dynamical (HD) systems subject to stochastic noise and state constraints. Solutions to stochastic optimal control (SOC) problems satisfy the Hamilton-Jacobi-Bellman (HJB) equation. Using existing FBSDE-based solutions, the optimal control can be obtained from the HJB equation using deep neural networks (e.g., long short-term memory (LSTM) networks). To ensure the learned controller respects the constraint boundaries, we enforce the state constraints using a soft penalty function. Going beyond previous works, we adapt the deep FBSDE (DFBSDE) control framework to handle HD systems consisting of continuous dynamics and a deterministic discrete state change. We demonstrate our proposed algorithm in simulation on a continuous nonlinear system (cart-pole) and a hybrid nonlinear system (five-link biped).
A fast topological approach for predicting anomalies in time-varying graphs
Abstract
Large time-varying graphs are increasingly common in financial, social, and biological settings. Feature extraction that efficiently encodes the complex structure of sparse, multi-layered, dynamic graphs presents computational and methodological challenges. In the past decade, the persistence diagram (PD) from topological data analysis (TDA) has become a popular descriptor of the shape of data, with a well-defined distance between points. However, applications of TDA to graphs, where there is no intrinsic concept of distance between the nodes, remain largely unexplored. This paper addresses this gap in the literature by introducing a computationally efficient framework to extract shape information from graph data. Our framework has two main steps: first, we compute a PD using the so-called lower-star filtration, which utilizes quantitative node attributes, and then vectorize it by averaging the associated Betti function over successive scale values on a one-dimensional grid. Our approach avoids embedding a graph into a metric space and has stability properties against input noise. In simulation studies, we show that the proposed vector summary leads to an improved change-point detection rate in time-varying graphs. In a real data application, our approach provides up to a 22% gain in anomalous price prediction for Ethereum cryptocurrency transaction networks.
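The vectorization step is simple enough to sketch directly: given a persistence diagram as (birth, death) pairs, compute the Betti function (the number of bars alive at each scale) and average it over successive cells of a one-dimensional grid. The lower-star filtration itself is omitted here, and the sub-grid resolution is an arbitrary choice.

```python
import numpy as np

def betti_vector(diagram, grid):
    """Vectorize a persistence diagram by averaging its Betti function
    over successive cells of a 1-D grid of scale values."""
    births, deaths = np.asarray(diagram, dtype=float).T

    def betti(t):                    # number of bars alive at scale t
        return np.sum((births <= t) & (t < deaths))

    out = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        ts = np.linspace(lo, hi, 10)               # fine sub-grid per cell
        out.append(np.mean([betti(t) for t in ts]))
    return np.array(out)

v = betti_vector([(0.0, 0.7), (0.2, 1.0), (0.5, 0.6)], np.linspace(0, 1, 6))
```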
Abstract
The primary challenge in video super-resolution (VSR) is to handle large motions in the input frames, which makes it difficult to accurately aggregate information from multiple frames. Existing works either adopt deformable convolutions or estimate optical flow as a prior to establish correspondences between frames for effective alignment and fusion. However, they fail to take into account valuable semantic information that could greatly enhance the alignment and fusion, and flow-based methods rely heavily on the accuracy of the flow estimation model, which may not provide precise flows given two low-resolution frames. In this paper, we investigate a more robust and semantic-aware prior for enhanced VSR by utilizing the Segment Anything Model (SAM), a powerful foundation model that is less susceptible to image degradation. To use the SAM-based prior, we propose a simple yet effective module -- the SAM-guidEd refinEment Module (SEEM) -- which can enhance both the alignment and fusion procedures through the utilization of semantic information. This lightweight plug-in module is specifically designed not only to leverage the attention mechanism for the generation of semantic-aware features but also to be easily and seamlessly integrated into existing methods. Concretely, we apply our SEEM to two representative methods, EDVR and BasicVSR, resulting in consistently improved performance with minimal implementation effort on three widely used VSR datasets: Vimeo-90K, REDS, and Vid4. More importantly, we found that the proposed SEEM can advance existing methods in an efficient tuning manner, providing increased flexibility in adjusting the balance between performance and the number of training parameters. Code will be open-sourced soon.
Probabilistic Group Testing in Distributed Computing with Attacked Workers
Authors: Sarthak Jain, Martina Cardone, Soheil Mohajer
Subjects: Information Theory (cs.IT); Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
The problem of distributed matrix-vector product is considered, where the server distributes the task of the computation among $n$ worker nodes, out of which $L$ are compromised (but non-colluding) and may return incorrect results. Specifically, it is assumed that the compromised workers are unreliable, that is, at any given time, each compromised worker may return an incorrect or a correct result with probabilities $\alpha$ and $1-\alpha$, respectively. Thus, the tests are noisy. This work proposes a new probabilistic group testing approach to identify the unreliable/compromised workers with $O\left(\frac{L\log(n)}{\alpha}\right)$ tests. Moreover, using the proposed group testing method, sparse parity-check codes are constructed and used in the considered distributed computing framework for encoding, decoding, and identifying the unreliable workers. This methodology has two distinct features: (i) the cost of identifying the set of $L$ unreliable workers at the server can be shown to be considerably lower than in existing distributed computing methods, and (ii) the encoding and decoding functions are easily implementable and computationally efficient.
A Semi-Automated Hybrid Schema Matching Framework for Vegetation Data Integration
Authors: Md Asif-Ur-Rahman, Bayzid Ashik Hossain, Michael Bewong, Md Zahidul Islam, Yanchang Zhao, Jeremy Groves, Rory Judith
Abstract
Integrating disparate and distributed vegetation data is critical for consistent and informed national policy development and management. Australia's National Vegetation Information System (NVIS), under the Department of Climate Change, Energy, the Environment and Water (DCCEEW), is the only nationally consistent vegetation database and hierarchical typology of vegetation types in different locations. Currently, this database employs manual approaches for integrating disparate state and territory datasets, which is labour-intensive and prone to human error. To cope with the ever-increasing need for up-to-date vegetation data derived from heterogeneous data sources, a Semi-Automated Hybrid Matcher (SAHM) is proposed in this paper. SAHM utilizes both schema-level and instance-level matching following a two-tier matching framework. A key novel technique in SAHM, called Multivariate Statistical Matching, is proposed for automated schema scoring, which takes advantage of domain knowledge and correlations between attributes to enhance the matching. To verify the effectiveness of the proposed framework, the performance of the individual as well as the combined components of SAHM has been evaluated. The empirical evaluation shows the effectiveness of the proposed framework, which outperforms existing state-of-the-art methods like Cupid, Coma, Similarity Flooding, Jaccard Leven Matcher, Distribution Based Matcher, and EmbDI. In particular, SAHM achieves between 88% and 100% accuracy with significantly better F1 scores than state-of-the-art techniques. SAHM is also shown to be several orders of magnitude more efficient than existing techniques.
Patch-wise Mixed-Precision Quantization of Vision Transformer
Abstract
As emerging hardware begins to support mixed bit-width arithmetic computation, mixed-precision quantization is widely used to reduce the complexity of neural networks. However, Vision Transformers (ViTs) require complex self-attention computation to guarantee the learning of powerful feature representations, which makes mixed-precision quantization of ViTs still challenging. In this paper, we propose a novel patch-wise mixed-precision quantization (PMQ) scheme for efficient inference of ViTs. Specifically, we design a lightweight global metric, which is faster to compute than existing methods, to measure the sensitivity of each component in ViTs to quantization errors. Moreover, we introduce a Pareto frontier approach to automatically allocate the optimal bit-precision according to the sensitivity. To further reduce the computational complexity of self-attention in the inference stage, we propose a patch-wise module to reallocate the bit-widths of patches in each layer. Extensive experiments on the ImageNet dataset show that our method greatly reduces the search cost and facilitates the application of mixed-precision quantization to ViTs.
Exploiting Fine-Grained DCT Representations for Hiding Image-Level Messages within JPEG Images
Authors: Junxue Yang, Xin Liao
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Unlike hiding bit-level messages, hiding image-level messages is more challenging, as it requires large capacity, high imperceptibility, and high security. Although recent advances in hiding image-level messages have been remarkable, existing schemes are limited to lossless spatial images as covers and cannot be directly applied to JPEG images, the ubiquitous lossy-format images in daily life. The difficulties of migration are caused by the lack of targeted design and the loss of details due to lossy decompression and re-compression. Considering that taking the DCT densely on $8\times8$ image patches is the core of the JPEG compression standard, we design a novel model called \textsf{EFDR}, which can comprehensively \underline{E}xploit \underline{F}ine-grained \underline{D}CT \underline{R}epresentations and embed the secret image into quantized DCT coefficients to avoid the lossy process. Specifically, we transform the JPEG cover image and hidden secret image into fine-grained DCT representations that compact the frequency content and are associated with the inter-block and intra-block correlations. Subsequently, the fine-grained DCT representations are further enhanced by a sub-band feature enhancement module. Afterward, a transformer-based invertibility module is designed to fuse the enhanced sub-band features. Such a design enables fine-grained self-attention on each sub-band and captures long-range dependencies while maintaining excellent reversibility for hiding and recovery. To the best of our knowledge, this is the first attempt to embed a color image of equal size in a color JPEG image. Extensive experiments demonstrate the effectiveness of our \textsf{EFDR} with superior performance.
Active Learning in the Predict-then-Optimize Framework: A Margin-Based Approach
Authors: Mo Liu, Paul Grigas, Heyuan Liu, Zuo-Jun Max Shen
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
Abstract
We develop the first active learning method in the predict-then-optimize framework. Specifically, we develop a learning method that sequentially decides whether to request the "labels" of feature samples from an unlabeled data stream, where the labels correspond to the parameters of an optimization model for decision-making. Our active learning method is the first to be directly informed by the decision error induced by the predicted parameters, which is referred to as the Smart Predict-then-Optimize (SPO) loss. Motivated by the structure of the SPO loss, our algorithm adopts a margin-based criterion utilizing the concept of distance to degeneracy and minimizes a tractable surrogate of the SPO loss on the collected data. In particular, we develop an efficient active learning algorithm with both hard and soft rejection variants, each with theoretical excess risk (i.e., generalization) guarantees. We further derive bounds on the label complexity, which refers to the number of samples whose labels are acquired to achieve a desired small level of SPO risk. Under some natural low-noise conditions, we show that these bounds can be better than the naive supervised learning approach that labels all samples. Furthermore, when using the SPO+ loss function, a specialized surrogate of the SPO loss, we derive a significantly smaller label complexity under separability conditions. We also present numerical evidence showing the practical value of our proposed algorithms in the settings of personalized pricing and the shortest path problem.
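A toy instance of the margin-based criterion, assuming a two-action decision problem where the distance to degeneracy reduces to the gap between the two predicted costs; the paper's criterion applies to general linear-objective problems with calibrated, time-varying thresholds.

```python
import numpy as np

def margin_query_rule(pred_cost, threshold):
    """Request a label only when the decision is nearly degenerate,
    i.e., the two predicted costs are close and the induced decision
    is uncertain. A sketch of the idea, not the paper's algorithm."""
    distance_to_degeneracy = abs(pred_cost[0] - pred_cost[1])
    return distance_to_degeneracy <= threshold

# Stream of predicted cost pairs (placeholder data).
stream = np.random.randn(1000, 2)
queried = [c for c in stream if margin_query_rule(c, threshold=0.3)]
print(f"labels requested for {len(queried)} of {len(stream)} samples")
```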
Robust stability of moving horizon estimation for continuous-time systems
Abstract
We consider a moving horizon estimation (MHE) scheme involving a discounted least squares objective for general nonlinear continuous-time systems. Provided that the system is detectable (incrementally integral input/output-to-state stable, i-iIOSS), we show that there exists a sufficiently long estimation horizon that guarantees robust global exponential stability of the estimation error in an $L^2$-to-$L^\infty$ sense. In addition, we show that i-iIOSS Lyapunov functions can be efficiently constructed by verifying certain linear matrix inequality conditions. In combination, we propose a flexible Lyapunov-based MHE framework in continuous time, which particularly offers more tuning possibilities than its discrete-time analog, and provide sufficient conditions for stability that can be easily verified in practice. Our results are illustrated by a numerical example.
PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer
Abstract
Recent Transformer-based 3D object detectors learn point cloud features either from point- or voxel-based representations. However, the former requires time-consuming sampling while the latter introduces quantization errors. In this paper, we present a novel Point-Voxel Transformer for single-stage 3D detection (PVT-SSD) that takes advantage of both representations. Specifically, we first use voxel-based sparse convolutions for efficient feature encoding. Then, we propose a Point-Voxel Transformer (PVT) module that obtains long-range contexts in a cheap manner from voxels while attaining accurate positions from points. The key to associating the two different representations is our input-dependent Query Initialization module, which efficiently generates reference points and content queries. PVT then adaptively fuses long-range contextual and local geometric information around the reference points into the content queries. Further, to quickly find the neighboring points of reference points, we design the Virtual Range Image module, which generalizes the native range image to multi-sensor and multi-frame settings. The experiments on several autonomous driving benchmarks verify the effectiveness and efficiency of the proposed method. Code will be available at https://github.com/Nightmare-n/PVT-SSD.
Joint Identification and Sensing for Discrete Memoryless Channels
Authors: Wafa Labidi, Christian Deppe, Holger Boche
Abstract
In the identification (ID) scheme proposed by Ahlswede and Dueck, the receiver only checks whether a message of special interest to him has been sent or not. In contrast to Shannon transmission codes, the size of ID codes for a Discrete Memoryless Channel (DMC) grows doubly exponentially fast with the blocklength, if randomized encoding is used. This groundbreaking result makes the ID paradigm more efficient than the classical Shannon transmission in terms of necessary energy and hardware components. Further gains can be achieved by taking advantage of additional resources such as feedback. We study the problem of joint ID and channel state estimation over a DMC with independent and identically distributed (i.i.d.) state sequences. The sender simultaneously sends an ID message over the DMC with a random state and estimates the channel state via a strictly causal channel output. The random channel state is available to neither the sender nor the receiver. For the proposed system model, we establish a lower bound on the ID capacity-distortion function.
Adaptive Privacy-Preserving Coded Computing With Hierarchical Task Partitioning
Authors: Qicheng Zeng, Zhaojun Nan, Sheng Zhou
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
Distributed computing is known as an emerging and efficient technique to support various intelligent services, such as large-scale machine learning. However, privacy leakage and random delays from straggling servers pose significant challenges. To address these issues, coded computing, a promising solution that combines coding theory with distributed computing, recovers computation tasks with results from a subset of workers. In this paper, we propose the adaptive privacy-preserving coded computing (APCC) strategy, which can adaptively provide accurate or approximated results according to the form of computation functions, so as to suit diverse types of computation tasks. We prove that APCC achieves complete data privacy preservation and demonstrate its optimality in terms of encoding rate, defined as the ratio between the computation loads of tasks before and after encoding. To further alleviate the straggling effect and reduce delay, we integrate hierarchical task partitioning and task cancellation into the coding design of APCC. The corresponding partitioning problems are formulated as mixed-integer nonlinear programming (MINLP) problems with the objective of minimizing task completion delay. We propose a low-complexity maximum value descent (MVD) algorithm to optimally solve these problems. Simulation results show that APCC can reduce task completion delay by at least 42.9% compared to other state-of-the-art benchmarks.
On practical robust reinforcement learning: adjacent uncertainty set and double-agent algorithm
Abstract
Robust reinforcement learning (RL) aims at learning a policy that optimizes the worst-case performance over an uncertainty set. Given a nominal Markov decision process (N-MDP) that generates samples for training, the set contains MDPs obtained by perturbations of the N-MDP. In this paper, we introduce a new uncertainty set containing MDPs that are more realistic in practice than those in existing sets. Using this uncertainty set, we present a robust RL algorithm, named ARQ-Learning, for the tabular case. We characterize its finite-time error bounds and prove that it converges as fast as Q-Learning and robust Q-Learning (i.e., the state-of-the-art robust RL method) while providing better robustness for real applications. We propose a {\em pessimistic agent} that efficiently tackles the key bottleneck in extending ARQ-Learning to large or continuous state spaces. Using this technique, we first propose PRQ-Learning. Next, combining it with DQN and DDPG, we develop PR-DQN and PR-DDPG, respectively. We emphasize that our technique can be easily combined with other popular model-free methods. Via experiments, we demonstrate the superiority of the proposed methods in various RL applications with model uncertainties.
On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm
Abstract
When fitting the learning data of an individual to algorithm-like learning models, the observations are so dependent and non-stationary that one may wonder what the classical Maximum Likelihood Estimator (MLE) can do, even though it is the usual tool applied in experimental cognition. Our first objective in this work is to show that the estimation of the learning rate cannot be efficient if the learning rate is constant in the classical Exp3 (Exponential weights for Exploration and Exploitation) algorithm. Secondly, we show that if the learning rate decreases polynomially with the sample size, then the prediction error, and in some cases the estimation error, of the MLE satisfies bounds in probability that decrease at a polynomial rate.
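For reference, the sketch below is the Exp3 algorithm whose learning rate the paper studies: exponential weights with importance-weighted gain estimates and exploration mixing, run with a constant learning rate `eta`. `pull(k)` is a stand-in environment returning gains in [0, 1], and the returned choice sequence is the kind of data one would fit the MLE on.

```python
import numpy as np

def exp3(n_arms, n_rounds, pull, eta, gamma=0.1):
    """Exp3 with a constant learning rate eta and exploration rate gamma."""
    w = np.ones(n_arms)
    choices = []
    for _ in range(n_rounds):
        p = (1 - gamma) * w / w.sum() + gamma / n_arms
        k = np.random.choice(n_arms, p=p)
        g_hat = pull(k) / p[k]            # unbiased gain estimate
        w[k] *= np.exp(eta * g_hat)
        choices.append(k)
    return choices

# Two Bernoulli arms with success probabilities 0.3 and 0.7.
arms = [0.3, 0.7]
hist = exp3(2, 1000, lambda k: np.random.binomial(1, arms[k]), eta=0.05)
```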
INGENIOUS: Using Informative Data Subsets for Efficient Pre-Training of Large Language Models
Authors: H S V N S Kowndinya Renduchintala, Krishnateja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
A salient characteristic of large pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and the emergence of new capabilities with increasing model capacity and pre-training dataset size. Consequently, we are witnessing the development of enormous models pushing the state of the art. It is, however, imperative to realize that this inevitably leads to prohibitively long training times, exorbitant computing costs, and a detrimental environmental impact. Significant efforts are underway to make PTLM training more efficient through innovations in model architectures, training pipelines, and loss function design, with scant attention being paid to optimizing the utility of training data. The key question that we ask is whether it is possible to train PTLMs using only highly informative subsets of the training data while maintaining downstream performance. Building upon the recent progress in informative data subset selection, we show how we can employ submodular optimization to select highly representative subsets of the training corpora. Our results demonstrate that the proposed framework can be applied to efficiently train multiple PTLMs (BERT, BioBERT, GPT-2) using only a fraction of the data while retaining up to $\sim99\%$ of the performance of the fully-trained models.
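One standard way to instantiate "submodular selection of representative subsets" is greedy maximization of a facility-location function over pairwise example similarities. The sketch below shows that selection step only, on placeholder similarities; it is not INGENIOUS's full pipeline.

```python
import numpy as np

def facility_location_greedy(sim, budget):
    """Greedy maximization of f(S) = sum_i max_{j in S} sim[i, j],
    a standard submodular proxy for how well S represents the corpus."""
    n = sim.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(budget):
        # f(S + {j}) for every candidate j, in one vectorized step.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

sim = np.random.rand(100, 100)        # pairwise similarities (placeholder)
subset = facility_location_greedy(sim, budget=10)
```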
Differentiable Programming: Efficient Smoothing of Control-Flow-Induced Discontinuities
Abstract
We want to obtain derivatives in discontinuous program code, where default Algorithmic Differentiation may not perform well. Specifically, we consider discontinuities induced by control flow statements, where meaningful derivatives should ideally be capable of representing the resulting kinks in the trajectory. To achieve this, one can interpolate the trajectory at the control flow statements before taking the derivative. We formulate a method to efficiently interpolate between all boundaries induced by control flow in program code. Theoretically, code can be viewed as a series of piecewise continuous functions applied in succession. These functions are nested inside one another and result in a function composition with several cases. We interpret this function composition as a tree and devise a heuristic to identify paths that are relevant to the interpolation. This allows us to conceive a language that smoothly interpolates control-flow statements automatically and efficiently, making it fully differentiable.
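The core idea can be shown on a single branch: replace the hard jump of an `if` with a sigmoid-weighted interpolation of the two branch values, so the composed function and its derivative remain defined across the boundary. A minimal sketch; the paper's method handles nested compositions of such cases via its tree heuristic.

```python
import numpy as np

def smooth_if(cond_value, then_val, else_val, k=50.0):
    """Smooth surrogate for `then_val if cond_value > 0 else else_val`:
    a sigmoid in the branch condition blends the two branch values."""
    w = 1.0 / (1.0 + np.exp(-k * cond_value))
    return w * then_val + (1.0 - w) * else_val

# abs(x) written as a branch, smoothed near the kink at x = 0.
x = np.linspace(-0.2, 0.2, 5)
print(smooth_if(x, x, -x))
```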
NUBO: A Transparent Python Package for Bayesian Optimisation
Authors: Mike Diessner, Kevin Wilson, Richard D. Whalley
Abstract
NUBO, short for Newcastle University Bayesian Optimisation, is a Bayesian optimisation framework for the optimisation of expensive-to-evaluate black-box functions, such as physical experiments and computer simulators. Bayesian optimisation is a cost-efficient optimisation strategy that uses surrogate modelling via Gaussian processes to represent an objective function and acquisition functions to guide the selection of candidate points to approximate the global optimum of the objective function. NUBO itself focuses on transparency and user experience to make Bayesian optimisation easily accessible to researchers from all disciplines. Clean and understandable code, precise references, and thorough documentation ensure transparency, while user experience is ensured by a modular and flexible design, easy-to-write syntax, and careful selection of Bayesian optimisation algorithms. NUBO allows users to tailor Bayesian optimisation to their specific problem by writing the optimisation loop themselves using the provided building blocks. It supports sequential single-point, parallel multi-point, and asynchronous optimisation of bounded, constrained, and/or mixed (discrete and continuous) parameter input spaces. Only algorithms and methods that are extensively tested and validated to perform well are included in NUBO. This ensures that the package remains compact and does not overwhelm the user with an unnecessarily large number of options. The package is written in Python but does not require expert knowledge of Python to optimise your simulators and experiments. NUBO is distributed as open-source software under the BSD 3-Clause licence.
Null-text Guidance in Diffusion Models is Secretly a Cartoon-style Creator
Authors: Jing Zhao, Heliang Zheng, Chaoyue Wang, Long Lan, Wanrong Huang, Wenjing Yang
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Classifier-free guidance is an effective sampling technique in diffusion models that has been widely adopted. The main idea is to extrapolate the model in the direction of the text guidance and away from the null-text guidance. In this paper, we demonstrate that null-text guidance in diffusion models is secretly a cartoon-style creator, i.e., the generated images can be efficiently transformed into cartoons by simply perturbing the null-text guidance. Specifically, we propose two disturbance methods, i.e., Rollback disturbance (Back-D) and Image disturbance (Image-D), to construct misalignment between the noisy images used for predicting null-text guidance and text guidance (subsequently referred to as the \textbf{null-text noisy image} and \textbf{text noisy image}, respectively) in the sampling process. Back-D achieves cartoonization by altering the noise level of the null-text noisy image, replacing $x_t$ with $x_{t+\Delta t}$. Image-D, alternatively, produces high-fidelity, diverse cartoons by defining $x_t$ as a clean input image, which further improves the incorporation of finer image details. Through comprehensive experiments, we delved into the principle of noise disturbance for null-text guidance and uncovered that the efficacy of the disturbance depends on the correlation between the null-text noisy image and the source image. Moreover, our proposed techniques, which can generate cartoon images and cartoonize specific ones, are training-free and easily integrated as a plug-and-play component in any classifier-free guided diffusion model. The project page is available at \url{https://nulltextforcartoon.github.io/}.
Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond
Authors: Zhu Liu, Jinyuan Liu, Guanyao Wu, Long Ma, Xin Fan, Risheng Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding, have attracted widespread attention for intelligent vision systems. However, early efforts always consider boosting a single task unilaterally while neglecting others, seldom investigating their underlying connections for joint promotion. To overcome these limitations, we establish a hierarchical dual-task-driven deep model to bridge these tasks. Concretely, we first construct an image fusion module to fuse complementary characteristics and cascade dual task-related modules, including a discriminator for visual effects and a semantic network for feature measurement. We provide a bi-level perspective to formulate image fusion and the follow-up downstream tasks. To incorporate distinct task-related responses into image fusion, we consider image fusion as the primary goal and the dual modules as learnable constraints. Furthermore, we develop an efficient first-order approximation to compute the corresponding gradients and present dynamic weighted aggregation to balance the gradients for fusion learning. Extensive experiments demonstrate the superiority of our method, which not only produces visually pleasing fused results but also achieves significant improvements in detection and segmentation over state-of-the-art approaches.
IVP-VAE: Modeling EHR Time Series with Initial Value Problem Solvers
Abstract
Continuous-time models such as Neural ODEs and Neural Flows have shown promising results in analyzing irregularly sampled time series frequently encountered in electronic health records. Based on these models, time series are typically processed with a hybrid of an initial value problem (IVP) solver and a recurrent neural network within the variational autoencoder architecture. Sequentially solving IVPs makes such models computationally less efficient. In this paper, we propose to model time series purely with continuous processes whose state evolution can be approximated directly by IVPs. This eliminates the need for recurrent computation and enables multiple states to evolve in parallel. We further fuse the encoder and decoder with one IVP solver based on its invertibility, which leads to fewer parameters and faster convergence. Experiments on three real-world datasets show that the proposed approach achieves comparable extrapolation and classification performance while gaining more than one order of magnitude speedup over other continuous-time counterparts.
Simplification of General Mixed Boolean-Arithmetic Expressions: GAMBA
Authors: Benjamin Reichenwallner, Peter Meerwald-Stadler
Abstract
Malware code often resorts to various self-protection techniques to complicate analysis. One such technique is applying Mixed-Boolean Arithmetic (MBA) expressions as a way to create opaque predicates and diversify and obfuscate the data flow. In this work we aim to provide tools for the simplification of nonlinear MBA expressions in a very practical context to compete in the arms race between the generation of hard, diverse MBAs and their analysis. The proposed algorithm GAMBA employs algebraic rewriting at its core and extends SiMBA. It achieves efficient deobfuscation of MBA expressions from the most widely tested public datasets and simplifies expressions to their ground truths in most cases, surpassing peer tools.
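To make the setting concrete: a linear MBA identity rewrites an arithmetic expression using Boolean operators, and any candidate simplification can be sanity-checked by exhaustive evaluation over small bit-vectors. The sketch below is such a brute-force equality checker, not GAMBA's algebraic rewriting.

```python
from itertools import product

def mba_equal(f, g, n_vars, bits=4):
    """Check two mixed Boolean-arithmetic expressions for semantic
    equality by exhaustive evaluation over all `bits`-bit inputs
    (feasible only for small bit-widths and few variables)."""
    mask = (1 << bits) - 1
    return all((f(*v) & mask) == (g(*v) & mask)
               for v in product(range(1 << bits), repeat=n_vars))

# A classic obfuscating identity: x + y == (x ^ y) + 2 * (x & y).
print(mba_equal(lambda x, y: x + y,
                lambda x, y: (x ^ y) + 2 * (x & y), n_vars=2))  # True
```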
Traceability and Reuse Mechanisms, the most important Properties of Model Transformation Languages
Abstract
Dedicated model transformation languages (MTLs) are claimed to provide many benefits over the use of general-purpose languages for developing model transformations. However, the actual advantages associated with the use of MTLs are poorly understood empirically. There is little knowledge and empirical assessment about which advantages and disadvantages hold and where they originate from. In a prior interview study, we elicited expert opinions on which advantages result from which factors, as well as a number of factors that moderate these influences. We aim to quantitatively assess the interview results to confirm or reject the effects posed by different factors. We intend to gain insights into how valuable different factors are, so that future studies can draw on these data for designing targeted and relevant studies. We gather data on the factors and quality attributes using an online survey. To analyse the data, we use universal structure modelling (USM) based on a structure model. We use significance values and path coefficients produced by USM for each hypothesised interdependence to confirm or reject correlations and to weigh the strength of the influences present. We analysed 113 responses. The results show that tracing and reuse mechanisms are the most important factors overall, though the observed effects were generally 10 times weaker than anticipated. Additionally, we found that a more nuanced view of moderation effects is warranted: their moderating influence differed significantly between the different influences, with the strongest effects being 1000 times stronger than the weakest. The empirical assessment of MTLs is a complex topic that cannot be addressed by looking at a single stand-alone factor. Our results provide a clear indication that evaluation should consider transformations of different sizes and use cases. Language development should focus on providing transformation-specific reuse mechanisms.
A Data-Driven Approach to Lightweight DVFS-Aware Counter-Based Power Modeling for Heterogeneous Platforms
Authors: Sergio Mazzola, Thomas Benz, Björn Forsberg, Luca Benini
Abstract
Computing systems have shifted towards highly parallel and heterogeneous architectures to tackle the challenges imposed by limited power budgets. These architectures must be supported by novel power management paradigms addressing the increasing design size, parallelism, and heterogeneity while ensuring high accuracy and low overhead. In this work, we propose a systematic, automated, and architecture-agnostic approach to accurate and lightweight DVFS-aware statistical power modeling of the CPU and GPU sub-systems of a heterogeneous platform, driven by the sub-systems' local performance monitoring counters (PMCs). Counter selection is guided by a generally applicable statistical method that identifies the minimal subsets of counters robustly correlating to power dissipation. Based on the selected counters, we train a set of lightweight, linear models characterizing each sub-system over a range of frequencies. Such models compose a lookup-table-based system-level model that efficiently captures the non-linearity of power consumption, showing desirable responsiveness and decomposability. We validate the system-level model on real hardware by measuring the total energy consumption of an NVIDIA Jetson AGX Xavier platform over a set of benchmarks. The resulting average estimation error is 1.3%, with a maximum of 3.1%. Furthermore, the model shows a maximum evaluation runtime of 500 ns, thus implying a negligible impact on system utilization and applicability to online dynamic power management (DPM).
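The lookup-table structure of the model is easy to picture: one linear counter-to-power model per DVFS frequency, selected at runtime by the active frequency. A minimal sketch with placeholder shapes; the paper's counter selection and validation methodology are far more systematic.

```python
import numpy as np

def fit_power_lut(samples):
    """Fit one affine PMC->power model per frequency. `samples` maps a
    frequency to (X, y): counter readings and measured power."""
    lut = {}
    for freq, (X, y) in samples.items():
        A = np.hstack([X, np.ones((X.shape[0], 1))])   # affine design matrix
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        lut[freq] = coef
    return lut

def predict_power(lut, freq, counters):
    coef = lut[freq]                  # model for the active frequency
    return counters @ coef[:-1] + coef[-1]

rng = np.random.default_rng(0)
X = rng.random((50, 3)); y = X @ np.array([2.0, 0.5, 1.0]) + 3.0
lut = fit_power_lut({1100: (X, y)})   # 1100 MHz: illustrative key
print(predict_power(lut, 1100, X[:2]))
```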
Utility-Maximizing Bidding Strategy for Data Consumers in Auction-based Federated Learning
Authors: Xiaoli Tang, Han Yu
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
Abstract
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners to join FL through economic means. Existing works assume that only one data consumer and multiple data owners exist in an AFL marketplace (i.e., a monopoly market). Therefore, data owners bid to join the data consumer for FL. However, this assumption is not realistic in practical AFL marketplaces, in which multiple data consumers can compete to attract data owners to join their respective FL tasks. In this paper, we bridge this gap by proposing a first-of-its-kind utility-maximizing bidding strategy for data consumers in federated learning (Fed-Bidder). It enables multiple FL data consumers to compete for data owners via AFL effectively and efficiently by providing utility estimation capabilities that can accommodate diverse forms of winning functions, each reflecting different market dynamics. Extensive experiments based on six commonly adopted benchmark datasets show that Fed-Bidder significantly outperforms four state-of-the-art approaches.
Information Design in Multi-Agent Reinforcement Learning
Authors: Yue Lin, Wenhao Li, Hongyuan Zha, Baoxiang Wang
Subjects: Computer Science and Game Theory (cs.GT); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
Abstract
Reinforcement learning (RL) mimics how humans and animals interact with the environment. The setting is somewhat idealized because, in actual tasks, other agents in the environment have their own goals and behave adaptively to the ego agent. To thrive in those environments, the agent needs to influence other agents so that their actions become more helpful and less harmful. Research in computational economics distills two ways to influence others directly: by providing tangible goods (mechanism design) and by providing information (information design). This work investigates information design problems for a group of RL agents. The main challenges are two-fold. One is that the information provided will immediately affect the transition of the agent trajectories, which introduces additional non-stationarity. The other is that the information can be ignored, so the sender must provide information that the receivers are willing to respect. We formulate the Markov signaling game, and develop the notions of the signaling gradient and extended obedience constraints that address these challenges. Our algorithm is efficient on various mixed-motive tasks and provides further insights into computational economics. Our code is available at https://github.com/YueLin301/InformationDesignMARL.
DeepSTEP -- Deep Learning-Based Spatio-Temporal End-To-End Perception for Autonomous Vehicles
Authors: Sebastian Huch, Florian Sauerbeck, Johannes Betz
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Autonomous vehicles demand high accuracy and robustness of perception algorithms. To develop efficient and scalable perception algorithms, the maximum information should be extracted from the available sensor data. In this work, we present our concept for an end-to-end perception architecture, named DeepSTEP. The deep learning-based architecture processes raw sensor data from the camera, LiDAR, and RaDAR, and combines the extracted data in a deep fusion network. The output of this deep fusion network is a shared feature space, which is used by perception head networks to fulfill several perception tasks, such as object detection or local mapping. DeepSTEP incorporates multiple ideas to advance the state of the art: First, combining detection and localization into a single pipeline allows for efficient processing, reducing computational overhead and further improving overall performance. Second, the architecture leverages the temporal domain by using a self-attention mechanism that focuses on the most important features. We believe that our concept of DeepSTEP will advance the development of end-to-end perception systems. The network will be deployed on our research vehicle, which will be used as a platform for data collection, real-world testing, and validation. In conclusion, DeepSTEP represents a significant advancement in the field of perception for autonomous vehicles. The architecture's end-to-end design, time-aware attention mechanism, and integration of multiple perception tasks make it a promising solution for real-world deployment. This research is a work in progress and presents the first concept for establishing a novel perception pipeline.
Abstract
We establish new separations between the power of monotone and general (non-monotone) Boolean circuits:

- For every $k \geq 1$, there is a monotone function in ${\sf AC^0}$ that requires monotone circuits of depth $\Omega(\log^k n)$. This significantly extends a classical result of Okol'nishnikova (1982) and Ajtai and Gurevich (1987). In addition, our separation holds for a monotone graph property, which was unknown even in the context of ${\sf AC^0}$ versus ${\sf mAC^0}$.
- For every $k \geq 1$, there is a monotone function in ${\sf AC^0}[\oplus]$ that requires monotone circuits of size $\exp(\Omega(\log^k n))$. This makes progress towards a question posed by Grigni and Sipser (1992).

These results show that constant-depth circuits can be more efficient than monotone circuits when computing monotone functions. In the opposite direction, we observe that non-trivial simulations are possible in the absence of parity gates: every monotone function computed by an ${\sf AC^0}$ circuit of size $s$ and depth $d$ can be computed by a monotone circuit of size $2^{n - n/O(\log s)^{d-1}}$. We show that the existence of significantly faster monotone simulations would lead to breakthrough circuit lower bounds. In particular, if every monotone function in ${\sf AC^0}$ admits a polynomial-size monotone circuit, then ${\sf NC^2}$ is not contained in ${\sf NC^1}$. Finally, we revisit our separation result against monotone circuit size and investigate the limits of our approach, which is based on a monotone lower bound for constraint satisfaction problems established by G\"o\"os et al. (2019) via lifting techniques. Adapting results of Schaefer (1978) and Allender et al. (2009), we obtain an unconditional classification of the monotone circuit complexity of Boolean-valued CSPs via their polymorphisms. This result and the consequences we derive from it might be of independent interest.
Multigrid preconditioning of singularly perturbed convection-diffusion equations
Authors: M. Shahid, S.P. MacLachlan, H. bin Zubair Syed
Abstract
Boundary value problems based on the convection-diffusion equation arise naturally in models of fluid flow across a variety of engineering applications and design feasibility studies. Naturally, their efficient numerical solution has continued to be an interesting and active topic of research for decades. In the context of finite-element discretization of these boundary value problems, the Streamline Upwind Petrov-Galerkin (SUPG) technique yields accurate discretization in the singularly perturbed regime. In this paper, we propose efficient multigrid iterative solution methods for the resulting linear systems. In particular, we show that techniques from standard multigrid for anisotropic problems can be adapted to these discretizations on both tensor-product as well as semi-structured meshes. The resulting methods are demonstrated to be robust preconditioners for several standard flow benchmarks.
A Generic Approach to Integrating Time into Spatial-Temporal Forecasting via Conditional Neural Fields
Abstract
Self-awareness is a key capability of autonomous systems, e.g., autonomous driving networks, which rely on highly efficient time-series forecasting algorithms to enable the system to reason about the future state of the environment, as well as its effect on the system behavior as time progresses. Recently, a large number of forecasting algorithms using either convolutional neural networks or graph neural networks have been developed to exploit the complex temporal and spatial dependencies present in the time series. While these solutions have shown significant advantages over statistical approaches, one open question is how to effectively incorporate the global information representing the seasonality patterns, carried by the time component of the time series, into the forecasting models to improve their accuracy. This paper presents a general approach to integrating the time component into forecasting models. The main idea is to employ conditional neural fields to represent the auxiliary features extracted from the time component so as to obtain the global information, which is then effectively combined with the local information extracted by autoregressive neural networks through a layer-wise gated fusion module. Extensive experiments on road traffic and cellular network traffic datasets prove the effectiveness of the proposed approach.
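As one plausible reading of the layer-wise gated fusion module, the sketch below learns a sigmoid gate that blends local autoregressive features with global features produced by the conditional neural field over the time component. The class name, dimensions, and gate form are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Blend local (autoregressive) and global (time-conditioned)
    features with a learned per-channel sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, local, global_):
        g = torch.sigmoid(self.gate(torch.cat([local, global_], dim=-1)))
        return g * local + (1 - g) * global_

fused = GatedFusion(32)(torch.randn(4, 32), torch.randn(4, 32))
```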
Emotion Recognition for Challenged People Facial Appearance in Social using Neural Network
Authors: P. Deivendran, P. Suresh Babu, G. Malathi, K. Anbazhagan, R. Senthil Kumar
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Human communication uses vocal and non-verbal signals. Facial expression is a significant biometric feature in the image and video databases of surveillance systems. Face recognition plays a serious role in biometric methods and is attractive for numerous applications, including visual surveillance and security. Facial expressions are a form of nonverbal communication; recognizing them helps improve human-machine interaction. This paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images. The detected face is passed to a CNN classifier, a deep feed-forward artificial neural network, which categorizes the acquired image into different emotion categories. Varying lighting conditions can influence the fitting process and reduce recognition precision. Results illustrate that facial expressions can be recognized reliably under changing lighting conditions, providing an efficient representation of both clean and mixed moving expressions. The process can also manage the proportions of dissimilar basic expressions that are mixed together to produce realistic emotional facial expressions. Our system contains a pre-defined dataset, developed by a data scientist, which includes all pure and mixed expressions. On average, the dataset achieved 92.4% correct validation of the expressions synthesized by our technique. These facial expressions are compared against the pre-defined dataset within our system. If the system recognizes a person in an abnormal condition, an alert message is sent to a nearby hospital or doctor.
Detection and Classification of Pole-like Landmarks for Domain-invariant 3D Point Cloud Map Matching
Authors: Sun Yifei, Li Dingrui, Ye Minying, Tanaka Kanji
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
In 3D point cloud-based visual self-localization, pole landmarks have great potential for accurate and reliable localization due to their long-term stability under seasonal and weather changes. In this study, we explore the use of recently developed deep learning models for pole classification in the context of pole landmark-based self-localization. Specifically, the proposed scheme consists of two main modules: pole map matching and pole class matching. In the former module, a local pole map is constructed and its configuration is compared against a precomputed global pole map. An efficient RANSAC map matching is employed to achieve a good tradeoff between computational efficiency and accuracy. In the latter pole class matching module, the local and global poles paired by the RANSAC map matching are further compared by means of pole attribute classes. To this end, a predefined set of pseudo pole classes is learned via k-means clustering in a self-supervised manner. Experiments using the publicly available NCLT dataset show that the pole-like landmark classification method improves the visual self-localization system compared with the baseline method.
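A minimal sketch of the self-supervised pseudo-class step described above, using scikit-learn's k-means; the toy pole descriptors and the choice of k=8 are illustrative assumptions, not the paper's feature design.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy pole descriptors, e.g. (height, radius, mean intensity) per detected pole
pole_features = rng.normal(size=(500, 3))

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pole_features)
pseudo_classes = kmeans.labels_          # pseudo-label per pole, no annotation needed

# at matching time, paired local/global poles can be compared by class id
local_pole, global_pole = pole_features[0], pole_features[1]
same = kmeans.predict([local_pole])[0] == kmeans.predict([global_pole])[0]
print("class match:", same)
```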
Enhancing Datalog Reasoning with Hypertree Decompositions
Authors: Xinyue Zhang, Pan Hu, Yavor Nenov, Ian Horrocks
Abstract
Datalog reasoning based on the seminaïve evaluation strategy evaluates rules using traditional join plans, which often leads to redundancy and inefficiency in practice, especially when the rules are complex. Hypertree decompositions help identify efficient query plans and reduce similar redundancy in query answering. However, it is unclear how this can be applied to materialisation and incremental reasoning with recursive Datalog programs. Moreover, hypertree decompositions require additional data structures and thus introduce non-negligible overhead in both runtime and memory consumption. In this paper, we provide algorithms that exploit hypertree decompositions for the materialisation and incremental evaluation of Datalog programs. Furthermore, we combine this approach with standard Datalog reasoning algorithms in a modular fashion so that the overhead caused by the decompositions is reduced. Our empirical evaluation shows that, when the program contains complex rules, the combined approach is usually significantly faster than the baseline approach, sometimes by orders of magnitude.
IUST_NLP at SemEval-2023 Task 10: Explainable Detecting Sexism with Transformers and Task-adaptive Pretraining
Abstract
This paper describes our system on SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). This work aims to design an automatic system for detecting and classifying sexist content in online spaces. We propose a set of transformer-based pre-trained models with task-adaptive pretraining and ensemble learning. The main contributions of our system include analyzing the performance of different transformer-based pre-trained models and combining these models, as well as providing an efficient method using large amounts of unlabeled data for model adaptive pretraining. We have also explored several other strategies. On the test dataset, our system achieves F1-scores of 83%, 64%, and 47% on subtasks A, B, and C, respectively.
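A minimal sketch of the logit-averaging form of ensembling mentioned above; the stand-in linear classifiers and the simple mean rule are illustrative assumptions, not the system's actual combination scheme.
```python
import torch

def ensemble_predict(models, batch):
    # stack per-model logits and average before taking the argmax
    logits = torch.stack([m(batch) for m in models])  # (num_models, B, classes)
    return logits.mean(dim=0).argmax(dim=-1)

models = [torch.nn.Linear(8, 3) for _ in range(3)]    # stand-in classifiers
print(ensemble_predict(models, torch.randn(4, 8)))
```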
An Imitation Learning Based Algorithm Enabling Priori Knowledge Transfer in Modern Electricity Markets for Bayesian Nash Equilibrium Estimation
Authors: Ziqing Zhu, Ka Wing Chan, Siqi Bu, Ze Hu, Shiwei Xia
Subjects: Computer Science and Game Theory (cs.GT); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
The Nash Equilibrium (NE) estimation in bidding games of electricity markets is a key concern of both generation companies (GENCOs) for bidding strategy optimization and the Independent System Operator (ISO) for market surveillance. However, existing methods for NE estimation in modern electricity markets (FEM) are inaccurate and inefficient because prior knowledge of bidding strategies before any environment change, such as load demand variations, network congestion, and modifications of market design, is not fully utilized. In this paper, a Bayes-adaptive Markov Decision Process in FEM (BAMDP-FEM) is therefore developed to model the GENCOs' bidding strategy optimization with this prior knowledge taken into account. A novel Multi-Agent Generative Adversarial Imitation Learning algorithm (MAGAIL-FEM) is then proposed to enable GENCOs to learn simultaneously from prior knowledge and from interactions with changing environments. The obtained NE is a Bayesian Nash Equilibrium (BNE) with prior knowledge transferred from the previous environment. In the case study, the superiority of the proposed algorithm in terms of convergence speed compared with conventional methods is verified. It is concluded that the optimal bidding strategies in the obtained BNE always lead to higher profits than the NE, owing to effective learning from prior knowledge. Moreover, the BNE is more accurate and more consistent with conditions in real-world markets.
GPU-initiated Fine-grained Overlap of Collective Communication with Computation
Authors: Kishore Punniyamurthy, Bradford M. Beckmann, Khaled Hamidouche
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Hardware Architecture (cs.AR)
Abstract
In order to satisfy their ever-increasing capacity and compute requirements, many machine learning models are distributed across multiple nodes using space-efficient parallelism strategies. As a result, collective communications are often on the critical path, and hiding their latency by overlapping kernel-granular communication and computation is difficult due to the absence of independent computation. In this work, we propose fusing computation with communication using GPU-initiated networking, and we leverage GPUs' massive parallelism to enable fine-grained overlap of the fused operations. We have developed a single, self-contained GPU kernel where workgroups (WGs) immediately communicate their results to remote GPUs when they complete their computation. Meanwhile, other WGs within the same kernel perform overlapping computation, maintaining high ALU utilization. Furthermore, we propose zero-copy optimizations for peer-to-peer GPU communication, where the data computed by one GPU is directly written to the destination buffers within the peer GPUs, eliminating intermediate stores and extra buffering. Our approach leverages the emerging multi-node GPU system trend in which GPUs are physically close to the network, with direct GPU-NIC interconnects. We demonstrate our approach by creating an embedding + All-to-All fused kernel which overlaps embedding operations and the dependent all-to-all collective in DLRM models. We evaluate our approach both in simulation and on real hardware. Our evaluations show that our approach can effectively overlap All-to-All communication with embedding computations, reducing their combined execution time by 31% on average (up to 58%) for inter-node and by 25% (up to 35%) for intra-node configurations. Scale-out simulations indicate that our approach reduces DLRM execution time by ~10% for a 128-node system.
Watch This Space: Securing Satellite Communication through Resilient Transmitter Fingerprinting
Authors: Joshua Smailes, Sebastian Kohler, Simon Birnbach, Martin Strohmeier, Ivan Martinovic
Subjects: Cryptography and Security (cs.CR); Signal Processing (eess.SP)
Abstract
Due to an increase in the availability of cheap off-the-shelf radio hardware, spoofing and replay attacks on satellite ground systems have become more accessible than ever. This is a particular problem for legacy systems, many of which do not offer cryptographic security and cannot be patched to support novel security measures. In this paper we explore radio transmitter fingerprinting in satellite systems. We introduce the SatIQ system, proposing novel techniques for authenticating transmissions using characteristics of the transmitter hardware expressed as impairments on the downlinked signal. We look in particular at high-sample-rate fingerprinting, making fingerprints difficult to forge without similarly high-sample-rate transmitting hardware, thus raising the budget for attacks. We also examine the difficulty of this approach with high levels of atmospheric noise and multipath scattering, and analyze potential solutions to this problem. We focus on the Iridium satellite constellation, for which we collected 1,010,464 messages at a sample rate of 25 MS/s. We use this data to train a fingerprinting model consisting of an autoencoder combined with a Siamese neural network, enabling the model to learn an efficient encoding of message headers that preserves identifying information. We demonstrate the system's robustness under attack by replaying messages using a Software-Defined Radio, achieving an Equal Error Rate of 0.120 and an ROC AUC of 0.946. Finally, we analyze its stability over time by introducing a time gap between training and testing data, and its extensibility by introducing new transmitters which have not been seen before. We conclude that our techniques are useful for building systems that are stable over time, can be used immediately with new transmitters without retraining, and provide robustness against spoofing and replay by raising the required budget for attacks.
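A minimal PyTorch sketch of the autoencoder-plus-Siamese flavor of fingerprinting described above, reduced to an encoder trained with a standard contrastive loss; the layer sizes, input length, and margin are illustrative assumptions, not SatIQ's architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy encoder mapping a flattened header window to a compact embedding
encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 32))

def contrastive_loss(x1, x2, same_tx, margin=1.0):
    d = F.pairwise_distance(encoder(x1), encoder(x2))
    # same transmitter: minimize distance; different: push beyond the margin
    return torch.mean(same_tx * d**2 + (1 - same_tx) * F.relu(margin - d)**2)

a, b = torch.randn(16, 512), torch.randn(16, 512)   # pairs of header windows
same_tx = torch.randint(0, 2, (16,)).float()        # 1 if same transmitter
print(contrastive_loss(a, b, same_tx))
```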
Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers
Authors: Firas Khader, Jakob Nikolas Kather, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Whole-Slide Imaging allows for the capturing and digitization of high-resolution images of histological specimens. An automated analysis of such images using deep learning models is therefore in high demand. The transformer architecture has been proposed as a possible candidate for effectively leveraging the high-resolution information. Here, the whole-slide image is partitioned into smaller image patches and feature tokens are extracted from these image patches. However, while the conventional transformer allows for a simultaneous processing of a large set of input tokens, the computational demand scales quadratically with the number of input tokens and thus quadratically with the number of image patches. To address this problem we propose a novel cascaded cross-attention network (CCAN) based on the cross-attention mechanism that scales linearly with the number of extracted patches. Our experiments demonstrate that this architecture is at least on par with, and even outperforms, other attention-based state-of-the-art methods on two public datasets: on the use-case of lung cancer (TCGA NSCLC) our model reaches a mean area under the receiver operating characteristic curve (AUC) of 0.970 $\pm$ 0.008, and on renal cancer (TCGA RCC) it reaches a mean AUC of 0.985 $\pm$ 0.004. Furthermore, we show that our proposed model is efficient in low-data regimes, making it a promising approach for analyzing whole-slide images in resource-limited settings. To foster research in this direction, we make our code publicly available on GitHub: XXX.
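To see why cross-attention can scale linearly in the number of patches, consider the generic sketch below: a small, fixed set of learned queries attends once over all N patch tokens, so the cost is O(N · num_queries) rather than O(N²). This illustrates the mechanism only and is not the CCAN architecture; all sizes are illustrative.
```python
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, dim=256, num_queries=16, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, patch_tokens):                 # (B, N, dim), N can be huge
        q = self.queries.expand(patch_tokens.size(0), -1, -1)
        out, _ = self.attn(q, patch_tokens, patch_tokens)  # cost ~ N * num_queries
        return out                                   # (B, num_queries, dim)

tokens = torch.randn(2, 10_000, 256)                 # 10k patches from one slide
print(LatentCrossAttention()(tokens).shape)          # torch.Size([2, 16, 256])
```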
Real-Time Joint Simulation of LiDAR Perception and Motion Planning for Automated Driving
Abstract
Real-time perception and motion planning are two crucial tasks for autonomous driving. While many research works have focused on improving the performance of perception and motion planning individually, it is still not clear how a perception error may adversely impact the motion planning results. In this work, we propose a joint simulation framework with LiDAR-based perception and motion planning for real-time automated driving. Taking the sensor input from the CARLA simulator with additive noise, a LiDAR perception system is designed to detect and track all surrounding vehicles and to provide precise orientation and velocity information. Next, we introduce a new collision bound representation that reduces the communication cost between the perception module and the motion planner. A novel collision checking algorithm is implemented using line intersection checking, which is more efficient over long distances compared to the traditional occupancy-grid method. We evaluate the joint simulation framework in CARLA for urban driving scenarios. Experiments show that our proposed automated driving system can execute at 25 Hz, which meets the real-time requirement. The LiDAR perception system has high accuracy within 20 meters when evaluated against the ground truth. The motion planner consistently keeps a safe distance when tested in CARLA urban driving scenarios.
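A minimal sketch of collision checking by line-segment intersection, the general technique named above; the orientation-test formulation and the polygon obstacle are illustrative stand-ins, not the paper's collision bound representation.
```python
def ccw(a, b, c):
    # twice the signed area of triangle (a, b, c); sign gives orientation
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)   # proper crossing only

def path_hits_obstacle(path_pts, obstacle_poly):
    # test every path segment against every polygon edge
    edges = list(zip(obstacle_poly, obstacle_poly[1:] + obstacle_poly[:1]))
    for p1, p2 in zip(path_pts, path_pts[1:]):
        if any(segments_intersect(p1, p2, q1, q2) for q1, q2 in edges):
            return True
    return False

square = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(path_hits_obstacle([(0, 0.5), (3, 1.5)], square))  # True: path crosses square
```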
Meta-hallucinator: Towards Few-Shot Cross-Modality Cardiac Image Segmentation
Authors: Ziyuan Zhao, Fangcheng Zhou, Zeng Zeng, Cuntai Guan, S. Kevin Zhou
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Image and Video Processing (eess.IV)
Abstract
Domain shift and label scarcity heavily limit deep learning applications to various medical image analysis tasks. Unsupervised domain adaptation (UDA) techniques have recently achieved promising cross-modality medical image segmentation by transferring knowledge from a label-rich source domain to an unlabeled target domain. However, it is also difficult to collect annotations from the source domain in many clinical applications, rendering most prior works suboptimal with the label-scarce source domain, particularly for few-shot scenarios, where only a few source labels are accessible. To achieve efficient few-shot cross-modality segmentation, we propose a novel transformation-consistent meta-hallucination framework, meta-hallucinator, with the goal of learning to diversify data distributions and generate useful examples for enhancing cross-modality performance. In our framework, hallucination and segmentation models are jointly trained with the gradient-based meta-learning strategy to synthesize examples that lead to good segmentation performance on the target domain. To further facilitate data hallucination and cross-domain knowledge transfer, we develop a self-ensembling model with a hallucination-consistent property. Our meta-hallucinator can seamlessly collaborate with the meta-segmenter for learning to hallucinate with mutual benefits from a combined view of meta-learning and self-ensembling learning. Extensive studies on the MM-WHS 2017 dataset for cross-modality cardiac segmentation demonstrate that our method outperforms various approaches by a large margin in the few-shot UDA scenario.
Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Authors: Eshaan Nichani, Alex Damian, Jason D. Lee
Abstract
One of the central questions in the theory of deep learning is to understand how neural networks learn hierarchical features. The ability of deep networks to extract salient features is crucial both to their outstanding generalization ability and to the modern deep learning paradigm of pretraining and fine-tuning. However, this feature learning process remains poorly understood from a theoretical perspective, with existing analyses largely restricted to two-layer networks. In this work we show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks. We analyze the features learned by a three-layer network trained with layer-wise gradient descent, and present a general-purpose theorem which upper bounds the sample complexity and width needed to achieve low test error when the target has specific hierarchical structure. We instantiate our framework in specific statistical learning settings -- single-index models and functions of quadratic features -- and show that in the latter setting three-layer networks obtain a sample complexity improvement over all existing guarantees for two-layer networks. Crucially, this sample complexity improvement relies on the ability of three-layer networks to efficiently learn nonlinear features. We then establish a concrete optimization-based depth separation by constructing a function which is efficiently learnable via gradient descent on a three-layer network, yet cannot be learned efficiently by a two-layer network. Our work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
Self-Chained Image-Language Model for Video Localization and Question Answering
Abstract
Recent studies have shown promising results on utilizing pre-trained image-language models for video question answering. While these image-language models can efficiently bootstrap the representation learning of video-language models, they typically concatenate uniformly sampled video frames as visual inputs without explicit language-aware, temporal modeling. When only a portion of a video input is relevant to the language query, such uniform frame sampling can often lead to missing important visual cues. Although humans often find a video moment to focus on and rewind the moment to answer questions, training a query-aware video moment localizer often requires expensive annotations and high computational costs. To address this issue, we propose Self-Chained Video Localization-Answering (SeViLA), a novel framework that leverages a single image-language model (BLIP-2) to tackle both temporal keyframe localization and QA on videos. The SeViLA framework consists of two modules, Localizer and Answerer, both parameter-efficiently fine-tuned from BLIP-2. We chain these modules for cascaded inference and self-refinement. First, in the forward chain, the Localizer finds multiple language-aware keyframes in a video, which the Answerer uses to predict the answer. Second, in the reverse chain, the Answerer generates keyframe pseudo-labels to refine the Localizer, alleviating the need for expensive video moment localization annotations. SeViLA outperforms several strong baselines/previous works on five video QA and event prediction tasks, and achieves state-of-the-art performance in both fine-tuning (NExT-QA, STAR) and zero-shot (NExT-QA, STAR, How2QA, VLEP) settings. We show a comprehensive analysis, e.g., the impact of the Localizer, comparisons of the Localizer with other temporal localization models, pre-training/self-refinement of the Localizer, and varying the number of keyframes.
Fair Price Discrimination
Authors: Siddhartha Banerjee, Kamesh Munagala, Yiheng Shen, Kangning Wang
Subjects: Computer Science and Game Theory (cs.GT); Data Structures and Algorithms (cs.DS); Theoretical Economics (econ.TH)
Abstract
A seller is pricing identical copies of a good to a stream of unit-demand buyers. Each buyer's value for the good is his private information. The seller only knows the empirical value distribution of the buyer population and chooses the revenue-optimal price. We consider a widely studied third-degree price discrimination model where an information intermediary with perfect knowledge of the arriving buyer's value sends a signal to the seller, hence changing the seller's posterior and inducing the seller to set a personalized posted price. Prior work of Bergemann, Brooks, and Morris (American Economic Review, 2015) has shown the existence of a signaling scheme that preserves seller revenue while always selling the item, hence maximizing consumer surplus. In a departure from prior work, we ask whether the consumer surplus generated is fairly distributed among buyers with different values. To this end, we aim to maximize welfare functions that reward more balanced surplus allocations. Our main result is the surprising existence of a novel signaling scheme that simultaneously $8$-approximates all welfare functions that are non-negative, monotonically increasing, symmetric, and concave, compared with any other signaling scheme. Classical examples of such welfare functions include the utilitarian social welfare, the Nash welfare, and the max-min welfare. Such a guarantee cannot be given by any of the consumer-surplus-maximizing schemes typically studied in the literature. In addition, our scheme is socially efficient, and has the fairness property that buyers with higher values enjoy higher expected surplus, which is not always the case for existing schemes.
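A tiny numeric illustration of the baseline (no signaling) described above, with made-up values: the seller posts the single revenue-optimal price for the empirical distribution, which here prices out all low-value buyers and leaves zero consumer surplus for a signaling scheme to redistribute.
```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 10.0])      # one buyer of each value (toy data)

def revenue(price):
    return price * np.mean(values >= price)   # sale probability times price

best = max(values, key=revenue)               # an optimal price sits at some value
print(best, revenue(best))                    # 10.0 2.5: only the top buyer is served
```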
SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input Views
Authors: Weihao Cheng, Yan-Pei Cao, Ying Shan
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
We study generating novel views of indoor scenes given sparse input views. The challenge is to achieve both photorealism and view consistency. We present SparseGNV: a learning framework that incorporates 3D structures and image generative models to generate novel views with three modules. The first module builds a neural point cloud as the underlying geometry, providing contextual information and guidance for the target novel view. The second module utilizes a transformer-based network to map the scene context and the guidance into a shared latent space and autoregressively decodes the target view in the form of discrete image tokens. The third module reconstructs the tokens into the image of the target view. SparseGNV is trained across a large indoor scene dataset to learn generalizable priors. Once trained, it can efficiently generate novel views of an unseen indoor scene in a feed-forward manner. We evaluate SparseGNV on both real-world and synthetic indoor scenes and demonstrate that it outperforms state-of-the-art methods based on either neural radiance fields or conditional image generation.
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
Abstract
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find that the speed of existing transformer models is commonly bounded by memory inefficient operations, especially the tensor reshaping and element-wise functions in MHSA. Therefore, we design a new building block with a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN layers, which improves memory efficiency while enhancing channel communication. Moreover, we discover that the attention maps share high similarities across heads, leading to computational redundancy. To address this, we present a cascaded group attention module feeding attention heads with different splits of the full feature, which not only saves computation cost but also improves attention diversity. Comprehensive experiments demonstrate EfficientViT outperforms existing efficient models, striking a good trade-off between speed and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by 1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX format. Code and models are available at https://github.com/microsoft/Cream/tree/main/EfficientViT.
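The sandwich layout can be sketched as below: several cheap FFN layers around a single attention layer. This is a simplified illustration using plain multi-head self-attention as a stand-in for the cascaded group attention module; dimensions and depths are illustrative assumptions.
```python
import torch
import torch.nn as nn

class SandwichBlock(nn.Module):
    def __init__(self, dim=192, heads=4, ffn_mult=2, n_ffn=2):
        super().__init__()
        def ffn():
            return nn.Sequential(nn.LayerNorm(dim),
                                 nn.Linear(dim, ffn_mult * dim), nn.ReLU(),
                                 nn.Linear(ffn_mult * dim, dim))
        self.pre = nn.ModuleList(ffn() for _ in range(n_ffn))
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.post = nn.ModuleList(ffn() for _ in range(n_ffn))

    def forward(self, x):                       # (B, N, dim)
        for f in self.pre:
            x = x + f(x)                        # cheap, memory-friendly FFN layers
        h = self.norm(x)
        x = x + self.attn(h, h, h)[0]           # single memory-bound attention
        for f in self.post:
            x = x + f(x)
        return x

print(SandwichBlock()(torch.randn(2, 196, 192)).shape)  # torch.Size([2, 196, 192])
```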
Keyword: faster
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
Abstract
Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks and dynamic convolutions to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Experimental results demonstrate that HyperE2VID achieves better reconstruction quality with fewer parameters and faster inference time than the state-of-the-art methods.
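A minimal PyTorch sketch of hypernetwork-style per-pixel dynamic filtering as described above: a small network predicts a KxK filter at every pixel from a context map, and the filter is applied via unfold. The channel counts and the softmax normalization are illustrative assumptions, not HyperE2VID's design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilter(nn.Module):
    def __init__(self, ctx_ch=16, k=3):
        super().__init__()
        self.k = k
        # hypernetwork: context features -> k*k filter weights per pixel
        self.hyper = nn.Conv2d(ctx_ch, k * k, kernel_size=3, padding=1)

    def forward(self, image, context):                 # (B,1,H,W), (B,ctx,H,W)
        B, _, H, W = image.shape
        filt = torch.softmax(self.hyper(context), dim=1)          # (B, k*k, H, W)
        patches = F.unfold(image, self.k, padding=self.k // 2)    # (B, k*k, H*W)
        patches = patches.view(B, self.k * self.k, H, W)
        return (filt * patches).sum(dim=1, keepdim=True)          # filtered image

df = DynamicFilter()
out = df(torch.randn(2, 1, 32, 32), torch.randn(2, 16, 32, 32))
print(out.shape)                                       # torch.Size([2, 1, 32, 32])
```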
Autonomous GIS: the next-generation AI-powered GIS
Abstract
Large Language Models (LLMs), such as ChatGPT, demonstrate a strong understanding of human natural language and have been explored and applied in various fields, including reasoning, creative writing, code generation, translation, and information retrieval. By adopting LLM as the reasoning core, we propose Autonomous GIS, an AI-powered geographic information system (GIS) that leverages the LLM's general abilities in natural language understanding, reasoning and coding for addressing spatial problems with automatic spatial data collection, analysis and visualization. We envision that autonomous GIS will need to achieve five autonomous goals including self-generating, self-organizing, self-verifying, self-executing, and self-growing. We introduce the design principles of autonomous GIS to achieve these five autonomous goals from the aspects of information sufficiency, LLM ability, and agent architecture. We developed a prototype system called LLM-Geo using GPT-4 API in a Python environment, demonstrating what an autonomous GIS looks like and how it delivers expected results without human intervention using two case studies. For both case studies, LLM-Geo successfully returned accurate results, including aggregated numbers, graphs, and maps, significantly reducing manual operation time. Although still lacking several important modules such as logging and code testing, LLM-Geo demonstrates a potential path towards next-generation AI-powered GIS. We advocate for the GIScience community to dedicate more effort to the research and development of autonomous GIS, making spatial analysis easier, faster, and more accessible to a broader audience.
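The self-generating/self-executing loop can be caricatured as below. This is a heavily simplified sketch of the general pattern, not LLM-Geo's implementation: ask_llm is a placeholder for a real LLM client (e.g. the GPT-4 API), and the retry-on-traceback repair loop is an assumption about how such systems are commonly wired.
```python
import traceback

def ask_llm(prompt):
    """Placeholder for a real LLM call (e.g. the GPT-4 API)."""
    raise NotImplementedError("plug in a real LLM client here")

def solve_spatial_task(task, max_attempts=3):
    prompt = ("Write Python that solves this GIS task and stores the answer "
              "in a variable named `result`:\n" + task)
    for _ in range(max_attempts):
        code = ask_llm(prompt)           # self-generating
        scope = {}
        try:
            exec(code, scope)            # self-executing
            return scope["result"]       # a self-verifying step would check this
        except Exception:
            # feed the traceback back so the model can repair its own code
            prompt += "\nThe previous attempt failed:\n" + traceback.format_exc()
    raise RuntimeError("no working program produced")
```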
Patch-wise Mixed-Precision Quantization of Vision Transformer
Abstract
As emerging hardware begins to support mixed bit-width arithmetic computation, mixed-precision quantization is widely used to reduce the complexity of neural networks. However, Vision Transformers (ViTs) require complex self-attention computation to guarantee the learning of powerful feature representations, which makes mixed-precision quantization of ViTs still challenging. In this paper, we propose a novel patch-wise mixed-precision quantization (PMQ) scheme for efficient inference of ViTs. Specifically, we design a lightweight global metric, which is faster to compute than existing methods, to measure the sensitivity of each component in ViTs to quantization errors. Moreover, we also introduce a Pareto frontier approach to automatically allocate the optimal bit-precision according to the sensitivity. To further reduce the computational complexity of self-attention in the inference stage, we propose a patch-wise module to reallocate the bit-widths of patches in each layer. Extensive experiments on the ImageNet dataset show that our method greatly reduces the search cost and facilitates the application of mixed-precision quantization to ViTs.
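A minimal sketch of sensitivity-driven bit allocation in the spirit described above: spend the bit budget where quantization hurts most. The sensitivity scores and the greedy rule are illustrative stand-ins for PMQ's global metric and Pareto-frontier search.
```python
def allocate_bits(sensitivity, budget, choices=(2, 4, 8)):
    # start everything at the lowest precision
    bits = {name: min(choices) for name in sensitivity}
    spent = sum(bits.values())
    # greedily upgrade the most quantization-sensitive components first
    for name in sorted(sensitivity, key=sensitivity.get, reverse=True):
        for b in sorted(choices):
            if b > bits[name] and spent + (b - bits[name]) <= budget:
                spent += b - bits[name]
                bits[name] = b
    return bits

sens = {"attn.qkv": 0.9, "attn.proj": 0.4, "ffn.fc1": 0.2, "ffn.fc2": 0.1}
print(allocate_bits(sens, budget=18))
# {'attn.qkv': 8, 'attn.proj': 4, 'ffn.fc1': 4, 'ffn.fc2': 2}
```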
PerFedRec++: Enhancing Personalized Federated Recommendation with Self-Supervised Pre-Training
Authors: Sichun Luo, Yuanzhang Xiao, Xinyi Zhang, Yang Liu, Wenbo Ding, Linqi Song
Abstract
Federated recommendation systems employ federated learning techniques to safeguard user privacy by transmitting model parameters instead of raw user data between user devices and the central server. Nevertheless, current federated recommender systems face challenges such as heterogeneity and personalization, model performance degradation, and communication bottlenecks. Previous studies have attempted to address these issues, but none have been able to solve them simultaneously. In this paper, we propose a novel framework, named PerFedRec++, to enhance personalized federated recommendation with self-supervised pre-training. Specifically, we utilize the privacy-preserving mechanism of federated recommender systems to generate two augmented graph views, which are used as contrastive tasks in self-supervised graph learning to pre-train the model. Pre-training enhances the performance of federated models by improving the uniformity of representation learning. Also, by providing a better initial state for federated training, pre-training makes the overall training converge faster, thus alleviating the heavy communication burden. We then construct a collaborative graph to learn the client representation through a federated graph neural network. Based on these learned representations, we cluster users into different user groups and learn personalized models for each cluster. Each user learns a personalized model by combining the global federated model, the cluster-level federated model, and its own fine-tuned local model. Experiments on three real-world datasets show that our proposed method achieves superior performance over existing methods.
Integer points in the degree-sequence polytope
Authors: Eleonore Bach, Friedrich Eisenbrand, Rom Pinchasi
Abstract
An integer vector $b \in \mathbb{Z}^d$ is a degree sequence if there exists a hypergraph with vertices $\{1,\dots,d\}$ such that each $b_i$ is the number of hyperedges containing $i$. The degree-sequence polytope $\mathscr{Z}^d$ is the convex hull of all degree sequences. We show that all but a $2^{-\Omega(d)}$ fraction of integer vectors in the degree-sequence polytope are degree sequences. Furthermore, the corresponding hypergraph of these points can be computed in time $2^{O(d)}$ via linear programming techniques. This is substantially faster than the $2^{O(d^2)}$ running time of the current-best algorithm for the degree-sequence problem. We also show that for $d\geq 98$, the degree-sequence polytope $\mathscr{Z}^d$ contains integer points that are not degree sequences. Furthermore, we prove that the linear optimization problem over $\mathscr{Z}^d$ is $\mathrm{NP}$-hard. This complements a recent result of Deza et al. (2018) who provide an algorithm that is polynomial in $d$ and the number of hyperedges.
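The definition above can be checked by brute force for tiny d (exponential in d, unlike the $2^{O(d)}$ algorithm discussed): a vector b is a degree sequence iff some set of distinct hyperedges has per-vertex membership counts equal to b. The snippet below is purely illustrative.
```python
from itertools import chain, combinations

def is_degree_sequence(b):
    d = len(b)
    # all non-empty hyperedges over vertices {0, ..., d-1}
    edges = list(chain.from_iterable(combinations(range(d), r)
                                     for r in range(1, d + 1)))
    # try every set of distinct hyperedges (feasible only for very small d)
    for hyperedges in chain.from_iterable(
            combinations(edges, m) for m in range(len(edges) + 1)):
        deg = [sum(i in e for e in hyperedges) for i in range(d)]
        if deg == list(b):
            return True
    return False

print(is_degree_sequence([2, 1, 1]))  # True, e.g. hyperedges {0,1} and {0,2}
print(is_degree_sequence([3, 0, 0]))  # False: only one hyperedge contains 0 alone
```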
IVP-VAE: Modeling EHR Time Series with Initial Value Problem Solvers
Abstract
Continuous-time models such as Neural ODEs and Neural Flows have shown promising results in analyzing irregularly sampled time series frequently encountered in electronic health records. Based on these models, time series are typically processed with a hybrid of an initial value problem (IVP) solver and a recurrent neural network within the variational autoencoder architecture. Sequentially solving IVPs makes such models computationally less efficient. In this paper, we propose to model time series purely with continuous processes whose state evolution can be approximated directly by IVPs. This eliminates the need for recurrent computation and enables multiple states to evolve in parallel. We further fuse the encoder and decoder with one IVP solver based on its invertibility, which leads to fewer parameters and faster convergence. Experiments on three real-world datasets show that the proposed approach achieves comparable extrapolation and classification performance while gaining more than one order of magnitude speedup over other continuous-time counterparts.
Abstract
We establish new separations between the power of monotone and general (non-monotone) Boolean circuits:
- For every $k \geq 1$, there is a monotone function in ${\sf AC^0}$ that requires monotone circuits of depth $\Omega(\log^k n)$. This significantly extends a classical result of Okol'nishnikova (1982) and Ajtai and Gurevich (1987). In addition, our separation holds for a monotone graph property, which was unknown even in the context of ${\sf AC^0}$ versus ${\sf mAC^0}$.
- For every $k \geq 1$, there is a monotone function in ${\sf AC^0}[\oplus]$ that requires monotone circuits of size $\exp(\Omega(\log^k n))$. This makes progress towards a question posed by Grigni and Sipser (1992).
These results show that constant-depth circuits can be more efficient than monotone circuits when computing monotone functions. In the opposite direction, we observe that non-trivial simulations are possible in the absence of parity gates: every monotone function computed by an ${\sf AC^0}$ circuit of size $s$ and depth $d$ can be computed by a monotone circuit of size $2^{n - n/O(\log s)^{d-1}}$. We show that the existence of significantly faster monotone simulations would lead to breakthrough circuit lower bounds. In particular, if every monotone function in ${\sf AC^0}$ admits a polynomial-size monotone circuit, then ${\sf NC^2}$ is not contained in ${\sf NC^1}$. Finally, we revisit our separation result against monotone circuit size and investigate the limits of our approach, which is based on a monotone lower bound for constraint satisfaction problems established by Göös et al. (2019) via lifting techniques. Adapting results of Schaefer (1978) and Allender et al. (2009), we obtain an unconditional classification of the monotone circuit complexity of Boolean-valued CSPs via their polymorphisms. This result and the consequences we derive from it might be of independent interest.
Enhancing Datalog Reasoning with Hypertree Decompositions
Authors: Xinyue Zhang, Pan Hu, Yavor Nenov, Ian Horrocks
Abstract
Datalog reasoning based on the seminaïve evaluation strategy evaluates rules using traditional join plans, which often leads to redundancy and inefficiency in practice, especially when the rules are complex. Hypertree decompositions help identify efficient query plans and reduce similar redundancy in query answering. However, it is unclear how this can be applied to materialisation and incremental reasoning with recursive Datalog programs. Moreover, hypertree decompositions require additional data structures and thus introduce non-negligible overhead in both runtime and memory consumption. In this paper, we provide algorithms that exploit hypertree decompositions for the materialisation and incremental evaluation of Datalog programs. Furthermore, we combine this approach with standard Datalog reasoning algorithms in a modular fashion so that the overhead caused by the decompositions is reduced. Our empirical evaluation shows that, when the program contains complex rules, the combined approach is usually significantly faster than the baseline approach, sometimes by orders of magnitude.
Adaptive Graduated Nonconvexity Loss
Authors: Kyungmin Jung, Thomas Hitchcox, James Richard Forbes
Abstract
Many problems in robotics, such as estimating the state from noisy sensor data or aligning two LiDAR point clouds, can be posed and solved as least-squares problems. Unfortunately, vanilla nonminimal solvers for least-squares problems are notoriously sensitive to outliers. As such, various robust loss functions have been proposed to reduce the sensitivity to outliers. Examples of such loss functions include pseudo-Huber, Cauchy, and Geman-McClure. Recently, these loss functions have been generalized into a single loss function that enables the best loss function to be found adaptively based on the distribution of the residuals. However, even with the generalized robust loss function, most nonminimal solvers can only be solved locally given a prior state estimate, due to the nonconvexity of the problem. The first contribution of this paper is to combine graduated nonconvexity (GNC) with the generalized robust loss function to solve least-squares problems without a prior state estimate and without the need to specify a loss function. Moreover, existing loss functions, including the generalized loss function, are based on Gaussian-like distributions. However, residuals are often defined as the squared norm of a multivariate error and are distributed in a Chi-like fashion. The second contribution of this paper is to apply a norm-aware adaptive robust loss function within a GNC framework. This leads to additional robustness when compared with state-of-the-art methods. Simulations and experiments demonstrate that the proposed approach is more robust and yields shorter convergence times compared to other GNC formulations.
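A minimal numpy sketch of graduated nonconvexity with a Geman-McClure-style loss for 1D robust location estimation: the surrogate starts nearly quadratic (convex) and is gradually made nonconvex. The weight formula and schedule follow the standard GNC-GM recipe, not the paper's norm-aware adaptive variant; all constants are illustrative.
```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(5.0, 0.1, 80),    # inliers around 5
                       rng.uniform(-50, 50, 20)])   # gross outliers

x = data.mean()                   # no prior estimate: start from the plain mean
mu = 1e4                          # large mu ~ convex (quadratic) surrogate
while mu >= 1.0:
    r2 = (data - x) ** 2
    w = (mu / (mu + r2)) ** 2     # GNC-GM weights; ~1 everywhere for large mu
    x = np.sum(w * data) / np.sum(w)   # weighted least-squares update (IRLS)
    mu /= 1.4                     # gradually restore nonconvexity
print(round(x, 2))                # close to 5.0 despite 20% outliers
```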
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model
Authors: Zhen Ye, Wei Xue, Xu Tan, Jie Chen, Qifeng Liu, Yike Guo
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Multimedia (cs.MM); Audio and Speech Processing (eess.AS)
Abstract
Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. In this paper, we propose a "Co"nsistency "Mo"del-based "Speech" synthesis method, CoMoSpeech, which achieves speech synthesis through a single diffusion sampling step while maintaining high audio quality. The consistency constraint is applied to distill a consistency model from a well-designed diffusion-based teacher model, which ultimately yields superior performance in the distilled CoMoSpeech. Our experiments show that, by generating audio recordings in a single sampling step, CoMoSpeech achieves an inference speed more than 150 times faster than real-time on a single NVIDIA A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling-based speech synthesis truly practical. Meanwhile, objective and subjective evaluations on text-to-speech and singing voice synthesis show that the proposed teacher models yield the best audio quality, and the one-step-sampling-based CoMoSpeech achieves the best inference speed with better or comparable audio quality to other conventional multi-step diffusion model baselines. Audio samples are available at https://comospeech.github.io/.
Decentralization and Acceleration Enables Large-Scale Bundle Adjustment
Authors: Taosha Fan, Joseph Ortiz, Ming Hsiao, Maurizio Monge, Jing Dong, Todd Murphey, Mustafa Mukadam
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Optimization and Control (math.OC)
Abstract
Scaling to arbitrarily large bundle adjustment problems requires data and compute to be distributed across multiple devices. Centralized methods in prior works are only able to solve small or medium-sized problems due to overhead in computation and communication. In this paper, we present a fully decentralized method that alleviates computation and communication bottlenecks to solve arbitrarily large bundle adjustment problems. We achieve this by reformulating the reprojection error and deriving a novel surrogate function that decouples optimization variables from different devices. This function makes it possible to use majorization-minimization techniques and reduces bundle adjustment to independent optimization subproblems that can be solved in parallel. We further apply Nesterov's acceleration and adaptive restart to improve convergence while maintaining its theoretical guarantees. Despite limited peer-to-peer communication, our method has provable convergence to first-order critical points under mild conditions. On extensive benchmarks with public datasets, our method converges much faster than decentralized baselines with similar memory usage and communication load. Compared to centralized baselines using a single device, our method, while being decentralized, yields more accurate solutions with significant speedups of up to 940.7x over Ceres and 175.2x over DeepLM. Code: https://github.com/facebookresearch/DBA.
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
Abstract
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find that the speed of existing transformer models is commonly bounded by memory inefficient operations, especially the tensor reshaping and element-wise functions in MHSA. Therefore, we design a new building block with a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN layers, which improves memory efficiency while enhancing channel communication. Moreover, we discover that the attention maps share high similarities across heads, leading to computational redundancy. To address this, we present a cascaded group attention module feeding attention heads with different splits of the full feature, which not only saves computation cost but also improves attention diversity. Comprehensive experiments demonstrate EfficientViT outperforms existing efficient models, striking a good trade-off between speed and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by 1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX format. Code and models are available at https://github.com/microsoft/Cream/tree/main/EfficientViT.
Keyword: mobile
PriGen: Towards Automated Translation of Android Applications' Code to Privacy Captions
Abstract
Mobile applications are required to give privacy notices to users when they collect or share personal information. Creating consistent and concise privacy notices can be a challenging task for developers. Previous work has attempted to help developers create privacy notices through questionnaires or predefined templates. In this paper, we propose a novel approach and a framework, called PriGen, that extends this prior work. PriGen uses static analysis to identify Android applications' code segments that process sensitive information (i.e., permission-requiring code segments) and then leverages a Neural Machine Translation model to translate them into privacy captions. We present the initial evaluation of our translation task for $\sim$300,000 code segments.
Full-Spectrum Wireless Communications for 6G and Beyond: From Microwave, Millimeter-Wave, Terahertz to Lightwave
Authors: Wei Jiang, Hans D. Schotten
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
As of today, 5G is rolling out across the world, but academia and industry have shifted their attention to the sixth generation (6G) cellular technology for a fully digitalized, intelligent society in 2030 and beyond. 6G demands far more bandwidth to support extreme performance, exacerbating the problem of spectrum shortage in mobile communications. In this context, this paper proposes a novel concept coined Full-Spectrum Wireless Communications (FSWC). It makes use of all communication-feasible spectral resources over the whole electromagnetic (EM) spectrum, from microwave, millimeter wave, terahertz (THz), and infrared light to visible light and ultraviolet light. FSWC not only provides sufficient bandwidth but also enables new paradigms that take advantage of peculiarities of different EM bands. This paper defines FSWC, justifies its necessity for 6G, and then discusses the opportunities and challenges of exploiting the THz and optical bands.
Multi-Tier Client Selection for Mobile Federated Learning Networks
Authors: Yulan Gao, Yansong Zhao, Han Yu
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Networking and Internet Architecture (cs.NI)
Abstract
Federated learning (FL), which addresses data privacy issues by training models on resource-constrained mobile devices in a distributed manner, has attracted significant research attention. However, the problem of optimizing FL client selection in mobile federated learning networks (MFLNs), where devices move in and out of each other's coverage and no FL server knows all the data owners, remains open. To bridge this gap, we propose a first-of-its-kind Socially-aware Federated Client Selection (SocFedCS) approach to minimize costs and train high-quality FL models. SocFedCS enriches the candidate FL client pool by enabling data owners to propagate FL task information through their local networks of trust, even as devices are moving into and out of each other's coverage. Based on Lyapunov optimization, we first transform this time-coupled problem into a step-by-step optimization problem. Then, we design a method based on alternating minimization and self-adaptive global best harmony search to solve this mixed-integer optimization problem. Extensive experiments comparing SocFedCS against five state-of-the-art approaches based on four real-world multimedia datasets demonstrate that it achieves 2.06% higher test accuracy and 12.24% lower cost on average than the best-performing baseline.
The NetMob23 Dataset: A High-resolution Multi-region Service-level Mobile Data Traffic Cartography
Authors: Orlando E. Martínez-Durive, Sachit Mishra, Cezary Ziemlicki, Stefania Rubrichi, Zbigniew Smoreda, Marco Fiore
Subjects: Networking and Internet Architecture (cs.NI)
Abstract
Digital sources have been enabling unprecedented data-driven and large-scale investigations across a wide range of domains, including demography, sociology, geography, urbanism, criminology, and engineering. A major barrier to innovation is represented by the limited availability of dependable digital datasets, especially in the context of data gathered by mobile network operators or service providers, due to concerns about user privacy and industrial competition. The resulting lack of reference datasets curbs the production of new research methods and results, and prevents verifiability and reproducibility of research outcomes. The NetMob23 dataset offers a rare opportunity to the multidisciplinary research community to access rich data about the spatio-temporal consumption of mobile applications in a developed country. The generation process of the dataset sets a new quality standard, leading to information about the demands generated by 68 popular mobile services, geo-referenced at a high resolution of $100\times100$ $m^2$ over 20 metropolitan areas in France, and monitored during 77 consecutive days in 2019.
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
Abstract
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find that the speed of existing transformer models is commonly bounded by memory inefficient operations, especially the tensor reshaping and element-wise functions in MHSA. Therefore, we design a new building block with a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN layers, which improves memory efficiency while enhancing channel communication. Moreover, we discover that the attention maps share high similarities across heads, leading to computational redundancy. To address this, we present a cascaded group attention module feeding attention heads with different splits of the full feature, which not only saves computation cost but also improves attention diversity. Comprehensive experiments demonstrate EfficientViT outperforms existing efficient models, striking a good trade-off between speed and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by 1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX format. Code and models are available at https://github.com/microsoft/Cream/tree/main/EfficientViT.
Keyword: pruning
Securing Distributed SGD against Gradient Leakage Threats
Abstract
This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). First, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their underlying impact on privacy guarantees, model accuracy, and attack resilience. Next, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, with differential privacy controlled noise as the tool. Unlike conventional methods, which use per-client federated noise injection with a fixed noise parameter, our approach keeps track of the trend of per-example gradient updates and keeps the adaptive noise injection closely aligned with it throughout federated model training. Finally, we provide an empirical privacy analysis of the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation using five benchmark datasets demonstrates that our gradient leakage resilient approach can outperform state-of-the-art methods with competitive accuracy performance, a strong differential privacy guarantee, and high resilience against gradient leakage attacks. The code associated with this paper can be found at: https://github.com/git-disl/Fed-alphaCDP.
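A minimal sketch of the gradient-perturbation family discussed above (per-example clipping plus Gaussian noise, in the style of DP-SGD); the clip bound and noise scale are illustrative constants, and this is not the paper's trend-tracking adaptive noise scheme.
```python
import numpy as np

def privatize(per_example_grads, clip=1.0, sigma=0.8, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / norm))   # bound each example's influence
    mean = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clip bound, spread over the batch
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads), size=mean.shape)
    return mean + noise                             # what leaves the client

grads = [np.random.randn(10) for _ in range(32)]    # toy per-example gradients
print(np.linalg.norm(privatize(grads)))
```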
SMART: Self-Morphing Anytime Replanning Tree
Authors: Zongyuan Shen, James P. Wilson, Shalabh Gupta, Ryan Harvey
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Abstract
The paper presents an algorithm, called Self-Morphing Anytime Replanning Tree (SMART), that facilitates anytime replanning in dynamic environments. SMART performs risk-based tree-pruning if its current path is obstructed by nearby moving obstacle(s), resulting in multiple disjoint subtrees. Then, for speedy recovery, it exploits these subtrees and performs informed tree-repair at hot-spots that lie at the intersection of subtrees to find a new path. The performance of SMART is comparatively evaluated against seven existing algorithms through extensive simulations. Two scenarios are considered: 1) dynamic obstacles only and 2) both static and dynamic obstacles. The results show that SMART yields significant improvements in replanning time, success rate, and travel time. Finally, the performance of SMART is validated by a real laboratory experiment.
Keyword: voxel
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
Abstract
Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks and dynamic convolutions to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Experimental results demonstrate that HyperE2VID achieves better reconstruction quality with fewer parameters and faster inference time than the state-of-the-art methods.
PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer
Abstract
Recent Transformer-based 3D object detectors learn point cloud features either from point- or voxel-based representations. However, the former requires time-consuming sampling while the latter introduces quantization errors. In this paper, we present a novel Point-Voxel Transformer for single-stage 3D detection (PVT-SSD) that takes advantage of these two representations. Specifically, we first use voxel-based sparse convolutions for efficient feature encoding. Then, we propose a Point-Voxel Transformer (PVT) module that obtains long-range contexts in a cheap manner from voxels while attaining accurate positions from points. The key to associating the two different representations is our introduced input-dependent Query Initialization module, which could efficiently generate reference points and content queries. Then, PVT adaptively fuses long-range contextual and local geometric information around reference points into content queries. Further, to quickly find the neighboring points of reference points, we design the Virtual Range Image module, which generalizes the native range image to multi-sensor and multi-frame. The experiments on several autonomous driving benchmarks verify the effectiveness and efficiency of the proposed method. Code will be available at https://github.com/Nightmare-n/PVT-SSD.
Keyword: lidar
DeepSTEP -- Deep Learning-Based Spatio-Temporal End-To-End Perception for Autonomous Vehicles
Authors: Sebastian Huch, Florian Sauerbeck, Johannes Betz
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Autonomous vehicles demand high accuracy and robustness of perception algorithms. To develop efficient and scalable perception algorithms, the maximum information should be extracted from the available sensor data. In this work, we present our concept for an end-to-end perception architecture, named DeepSTEP. The deep learning-based architecture processes raw sensor data from the camera, LiDAR, and RaDAR, and combines the extracted data in a deep fusion network. The output of this deep fusion network is a shared feature space, which is used by perception head networks to fulfill several perception tasks, such as object detection or local mapping. DeepSTEP incorporates multiple ideas to advance the state of the art: First, combining detection and localization into a single pipeline allows for efficient processing, reducing computational overhead and further improving overall performance. Second, the architecture leverages the temporal domain by using a self-attention mechanism that focuses on the most important features. We believe that our concept of DeepSTEP will advance the development of end-to-end perception systems. The network will be deployed on our research vehicle, which will be used as a platform for data collection, real-world testing, and validation. In conclusion, DeepSTEP represents a significant advancement in the field of perception for autonomous vehicles. The architecture's end-to-end design, time-aware attention mechanism, and integration of multiple perception tasks make it a promising solution for real-world deployment. This research is a work in progress and presents the first concept for establishing a novel perception pipeline.
Adaptive Graduated Nonconvexity Loss
Authors: Kyungmin Jung, Thomas Hitchcox, James Richard Forbes
Abstract
Many problems in robotics, such as estimating the state from noisy sensor data or aligning two LiDAR point clouds, can be posed and solved as least-squares problems. Unfortunately, vanilla nonminimal solvers for least-squares problems are notoriously sensitive to outliers. As such, various robust loss functions have been proposed to reduce the sensitivity to outliers. Examples of loss functions include pseudo-Huber, Cauchy, and Geman-McClure. Recently, these loss functions have been generalized into a single loss function that enables the best loss function to be found adaptively based on the distribution of the residuals. However, even with the generalized robust loss function, most nonminimal solvers can only solve the problem locally, given a prior state estimate, due to the nonconvexity of the problem. The first contribution of this paper is to combine graduated nonconvexity (GNC) with the generalized robust loss function to solve least-squares problems without a prior state estimate and without the need to specify a loss function. Moreover, existing loss functions, including the generalized loss function, are based on a Gaussian-like distribution. However, residuals are often defined as the squared norm of a multivariate error and distributed in a Chi-like fashion. The second contribution of this paper is to apply a norm-aware adaptive robust loss function within a GNC framework. This leads to additional robustness when compared with state-of-the-art methods. Simulations and experiments demonstrate that the proposed approach is more robust and yields faster convergence times compared to other GNC formulations.
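As a point of reference (an editorial sketch, not the paper's exact formulation), the generalized robust loss popularized by Barron and a simple graduated-nonconvexity schedule over its shape parameter can be written as:

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """Barron-style general robust loss on residual x.

    alpha = 2 recovers L2, alpha = 0 approaches Cauchy, alpha = -2 Geman-McClure.
    """
    z = (x / c) ** 2
    if alpha == 2.0:
        return 0.5 * z
    if alpha == 0.0:
        return np.log1p(0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

def gnc_alpha_schedule(alpha_target=-2.0, steps=10):
    # Graduated nonconvexity: start from the convex alpha = 2 surrogate and
    # anneal toward the nonconvex target, re-solving the least-squares problem
    # at each stage with the previous solution as the initial guess.
    return np.linspace(2.0, alpha_target, steps)
```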
Rhino: An Autonomous Robot for Mapping Underground Mine Environments
Authors: Christopher Tatsch, Jonas Amoama Bredu Jnr, Dylan Covell, Ihsan Berk Tulu, Yu Gu
Abstract
There are many benefits to exploring and exploiting underground mines, but there are also significant risks and challenges. One such risk is the potential for accidents caused by the collapse of pillars and roofs, which can be mitigated through inspections. However, these inspections can be costly and may put the safety of the inspectors at risk. To address this issue, this work presents Rhino, an autonomous robot that can navigate underground mine environments and generate 3D maps. These generated maps will allow mine workers to proactively respond to potential hazards and prevent accidents. The system being developed is a skid-steer, four-wheeled unmanned ground vehicle (UGV) that uses a LiDAR and an IMU to perform long-duration autonomous navigation and generation of maps through a LIO-SAM framework. The system has been tested in different environments and terrains to ensure its robustness and ability to operate for extended periods of time while also generating 3D maps.
Real-Time Joint Simulation of LiDAR Perception and Motion Planning for Automated Driving
Abstract
Real-time perception and motion planning are two crucial tasks for autonomous driving. While there are many research works focused on improving the performance of perception and motion planning individually, it is still unclear how a perception error may adversely impact the motion planning results. In this work, we propose a joint simulation framework with LiDAR-based perception and motion planning for real-time automated driving. Taking the sensor input from the CARLA simulator with additive noise, a LiDAR perception system is designed to detect and track all surrounding vehicles and to provide precise orientation and velocity information. Next, we introduce a new collision bound representation that reduces the communication cost between the perception module and the motion planner. A novel collision checking algorithm is implemented using line intersection checking, which is more efficient over long ranges than the traditional occupancy-grid method. We evaluate the joint simulation framework in CARLA for urban driving scenarios. Experiments show that our proposed automated driving system can execute at 25 Hz, which meets the real-time requirement. The LiDAR perception system has high accuracy within 20 meters when evaluated with the ground truth. The motion planning results in consistent safe distance keeping when tested in CARLA urban driving scenarios.
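To make the line-intersection idea concrete, here is the standard orientation-test check for two segments (an illustrative sketch in general position; collinear or touching cases would need extra handling):

```python
def ccw(a, b, c):
    """Twice the signed area of triangle (a, b, c); positive for a left turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True iff segment p1-p2 properly crosses segment q1-q2."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

# e.g. a planned trajectory edge vs. one edge of a vehicle's collision bound:
assert segments_intersect((0, 0), (4, 4), (0, 4), (4, 0))
```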
Keyword: diffusion
Analyzing Bias in Diffusion-based Face Generation Models
Authors: Malsha V. Perera, Vishal M. Patel
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Diffusion models are becoming increasingly popular in synthetic data generation and image editing applications. However, these models can amplify existing biases and propagate them to downstream applications. Therefore, it is crucial to understand the sources of bias in their outputs. In this paper, we investigate the presence of bias in diffusion-based face generation models with respect to attributes such as gender, race, and age. Moreover, we examine how dataset size affects the attribute composition and perceptual quality of both diffusion and Generative Adversarial Network (GAN) based face generation models across various attribute classes. Our findings suggest that diffusion models tend to worsen distribution bias in the training data for various attributes, which is heavily influenced by the size of the dataset. Conversely, GAN models trained on balanced datasets with a larger number of samples show less bias across different attributes.
Undercover Deepfakes: Detecting Fake Segments in Videos
Abstract
The recent renaissance in generative models, driven primarily by the advent of diffusion models and iterative improvements in GAN methods, has enabled many creative applications. However, each advancement is also accompanied by a rise in the potential for misuse. In the arena of deepfake generation, this is a key societal issue. In particular, the ability to modify segments of videos using such generative techniques creates a new paradigm of deepfakes which are mostly real videos altered slightly to distort the truth. Current deepfake detection methods in the academic literature are not evaluated on this paradigm. In this paper, we present a deepfake detection method able to address this issue by performing both frame- and video-level deepfake prediction. To facilitate testing our method, we create a new benchmark dataset where videos have both real and fake frame sequences. Our method utilizes the Vision Transformer, Scaling and Shifting pretraining, and Timeseries Transformer to temporally segment videos to help facilitate the interpretation of possible deepfakes. Extensive experiments on a variety of deepfake generation methods show excellent results on temporal segmentation as well as on classical video-level prediction. In particular, the paradigm we introduce will form a powerful tool for the moderation of deepfakes, where human oversight can be better targeted to the parts of videos suspected of being deepfakes. All experiments can be reproduced at: https://github.com/sanjaysaha1311/temporal-deepfake-segmentation.
Null-text Guidance in Diffusion Models is Secretly a Cartoon-style Creator
Authors: Jing Zhao, Heliang Zheng, Chaoyue Wang, Long Lan, Wanrong Huang, Wenjing Yang
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Classifier-free guidance is an effective sampling technique in diffusion models that has been widely adopted. The main idea is to extrapolate the model in the direction of text guidance and away from null-text guidance. In this paper, we demonstrate that null-text guidance in diffusion models is secretly a cartoon-style creator, i.e., the generated images can be efficiently transformed into cartoons by simply perturbing the null-text guidance. Specifically, we propose two disturbance methods, i.e., Rollback disturbance (Back-D) and Image disturbance (Image-D), to construct misalignment between the noisy images used for predicting null-text guidance and text guidance (subsequently referred to as \textbf{null-text noisy image} and \textbf{text noisy image} respectively) in the sampling process. Back-D achieves cartoonization by altering the noise level of the null-text noisy image via replacing $x_t$ with $x_{t+\Delta t}$. Image-D, alternatively, produces high-fidelity, diverse cartoons by defining $x_t$ as a clean input image, which further improves the incorporation of finer image details. Through comprehensive experiments, we delve into the principle of noise disturbance for null-text guidance and uncover that the efficacy of disturbance depends on the correlation between the null-text noisy image and the source image. Moreover, our proposed techniques, which can generate cartoon images and cartoonize specific ones, are training-free and easily integrated as a plug-and-play component in any classifier-free guided diffusion model. Project page is available at \url{https://nulltextforcartoon.github.io/}.
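For intuition, a hedged sketch of classifier-free guidance with a Back-D-style disturbed null branch (the noise predictor `eps_model` and the exact step bookkeeping are assumptions, not the authors' code):

```python
import torch

@torch.no_grad()
def cfg_back_d(eps_model, x_t, x_t_noisier, t, t_noisier, cond, null_cond, w=7.5):
    # Standard classifier-free guidance evaluates both branches at (x_t, t):
    #     eps = eps_null + w * (eps_text - eps_null)
    # Back-D instead feeds the *null-text* branch the noisier latent from the
    # previous sampling step, i.e. replaces x_t with x_{t+dt} in that branch only.
    eps_null = eps_model(x_t_noisier, t_noisier, null_cond)  # disturbed null branch
    eps_text = eps_model(x_t, t, cond)                       # text branch unchanged
    return eps_null + w * (eps_text - eps_null)
```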
Multigrid preconditioning of singularly perturbed convection-diffusion equations
Authors: M. Shahid, S.P. MacLachlan, H. bin Zubair Syed
Abstract
Boundary value problems based on the convection-diffusion equation arise naturally in models of fluid flow across a variety of engineering applications and design feasibility studies. Naturally, their efficient numerical solution has continued to be an interesting and active topic of research for decades. In the context of finite-element discretization of these boundary value problems, the Streamline Upwind Petrov-Galerkin (SUPG) technique yields accurate discretization in the singularly perturbed regime. In this paper, we propose efficient multigrid iterative solution methods for the resulting linear systems. In particular, we show that techniques from standard multigrid for anisotropic problems can be adapted to these discretizations on both tensor-product as well as semi-structured meshes. The resulting methods are demonstrated to be robust preconditioners for several standard flow benchmarks.
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model
Authors: Zhen Ye, Wei Xue, Xu Tan, Jie Chen, Qifeng Liu, Yike Guo
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Multimedia (cs.MM); Audio and Speech Processing (eess.AS)
Abstract
Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. In this paper, we propose a "Co"nsistency "Mo"del-based "Speech" synthesis method, CoMoSpeech, which achieves speech synthesis through a single diffusion sampling step while achieving high audio quality. The consistency constraint is applied to distill a consistency model from a well-designed diffusion-based teacher model, which ultimately yields superior performances in the distilled CoMoSpeech. Our experiments show that by generating audio recordings with a single sampling step, CoMoSpeech achieves an inference speed more than 150 times faster than real-time on a single NVIDIA A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling-based speech synthesis truly practical. Meanwhile, objective and subjective evaluations on text-to-speech and singing voice synthesis show that the proposed teacher models yield the best audio quality, and the one-step sampling-based CoMoSpeech achieves the best inference speed with better or comparable audio quality to other conventional multi-step diffusion model baselines. Audio samples are available at https://comospeech.github.io/.
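For context, one-step sampling with a distilled consistency model reduces to a single network evaluation at the terminal noise level (a generic sketch; `f_theta` is a hypothetical trained consistency function, not CoMoSpeech's actual interface):

```python
import torch

@torch.no_grad()
def one_step_sample(f_theta, cond, sigma_max, shape):
    # A consistency model maps any point on the diffusion trajectory straight
    # back to its origin, so generation needs only one evaluation at sigma_max.
    x_T = sigma_max * torch.randn(shape)            # terminal noise
    t = torch.full((shape[0],), sigma_max)
    return f_theta(x_T, t, cond)                    # clean sample estimate
```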
Exploiting Diffusion Prior for Real-World Image Super-Resolution
Abstract
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution (SR). Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we introduce a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches.
Keyword: dynamic
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
Abstract
Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks and dynamic convolutions to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Experimental results demonstrate that HyperE2VID achieves better reconstruction quality with fewer parameters and faster inference time than state-of-the-art methods.
Self-contained relaxation-based dynamical Ising machines
Abstract
Dynamical Ising machines are continuous dynamical systems evolving from a generic initial state to a state strongly related to the ground state of the classical Ising model on a graph. Reaching the ground state is equivalent to finding the maximum (weighted) cut of the graph, which presents Ising machines as an alternative way of solving and investigating NP-complete problems. Among the dynamical models driving Ising machines, relaxation-based models are especially interesting because of their relation to performance guarantees achieved in time scaling polynomially with the problem size. However, the terminal states of such machines are essentially non-binary, which necessitates special post-processing relying on disparate computing. We show that an Ising machine implementing a special dynamical system (called \mdII{}) solves the rounding problem dynamically. We prove that the \mdII-machine starting from an arbitrary non-binary state terminates in a state that trivially rounds to a binary state with a cut at least as large as that obtained after the optimal rounding of the initial state. Besides showing that relaxation-based dynamical Ising machines can be made self-contained, our findings demonstrate that dynamical systems can directly perform complex information processing tasks.
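For reference, the trivial sign rounding and the weighted-cut objective it is evaluated against look as follows (a generic sketch; the paper's contribution is that its dynamics make this naive rounding lossless):

```python
import numpy as np

def round_state(x):
    """Round a continuous Ising-machine state to binary spins."""
    return np.where(x >= 0.0, 1.0, -1.0)

def cut_value(W, s):
    """Weighted cut of a graph with symmetric weight matrix W under spins s."""
    return 0.25 * np.sum(W * (1.0 - np.outer(s, s)))
```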
Planning a Community Approach to Diabetes Care in Low- and Middle-Income Countries Using Optimization
Authors: Katherine B. Adams, Justin J. Boutilier, Sarang Deo, Yonatan Mintz
Subjects: Artificial Intelligence (cs.AI); Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
Diabetes is a global health priority, especially in low- and middle-income countries, where over 50% of premature deaths are attributed to high blood glucose. Several studies have demonstrated the feasibility of using Community Health Worker (CHW) programs to provide affordable and culturally tailored solutions for early detection and management of diabetes. Yet, scalable models to design and implement CHW programs while accounting for screening, management, and patient enrollment decisions have not been proposed. We introduce an optimization framework to determine personalized CHW visits that maximize glycemic control at a community level. Our framework explicitly models the trade-off between screening new patients and providing management visits to individuals who are already enrolled in treatment. We account for patients' motivational states, which affect their decisions to enroll or drop out of treatment and, therefore, the effectiveness of the intervention. We incorporate these decisions by modeling patients as utility-maximizing agents within a bi-level provider problem that we solve using approximate dynamic programming. By estimating patients' health and motivational states, our model builds visit plans that account for patients' tradeoffs when deciding to enroll in treatment, leading to reduced dropout rates and improved resource allocation. We apply our approach to generate CHW visit plans using operational data from a social enterprise serving low-income neighborhoods in urban areas of India. Through extensive simulation experiments, we find that our framework requires up to 73.4% less capacity than the best naive policy to achieve the same performance in terms of glycemic control. Our experiments also show that our solution algorithm can improve upon naive policies by up to 124.5% using the same CHW capacity.
Dynamic Graph Representation Learning for Depression Screening with Transformer
Authors: Ai-Te Kuo, Haiquan Chen, Yu-Hsuan Kuo, Wei-Shinn Ku
Subjects: Machine Learning (cs.LG); Information Retrieval (cs.IR); Social and Information Networks (cs.SI)
Abstract
Early detection of mental disorders is crucial as it enables prompt intervention and treatment, which can greatly improve outcomes for individuals suffering from debilitating mental afflictions. The recent proliferation of mental health discussions on social media platforms presents research opportunities to investigate mental health and potentially detect instances of mental illness. However, existing depression detection methods are constrained due to two major limitations: (1) the reliance on feature engineering and (2) the lack of consideration for time-varying factors. Specifically, these methods require extensive feature engineering and domain knowledge, which heavily rely on the amount, quality, and type of user-generated content. Moreover, these methods ignore the important impact of time-varying factors on depression detection, such as the dynamics of linguistic patterns and interpersonal interactive behaviors over time on social media (e.g., replies, mentions, and quote-tweets). To tackle these limitations, we propose ContrastEgo, an early depression detection framework that treats each user as a dynamic, time-evolving attributed graph (ego-network) and leverages supervised contrastive learning to maximize the agreement of users' representations at different scales while minimizing the agreement of users' representations to differentiate between depressed and control groups. ContrastEgo comprises four modules: (1) constructing users' heterogeneous interactive graphs, (2) extracting the representations of users' interaction snapshots using graph neural networks, (3) modeling the sequences of snapshots using an attention mechanism, and (4) depression detection using contrastive learning. Extensive experiments on Twitter data demonstrate that ContrastEgo significantly outperforms the state-of-the-art methods in terms of all the effectiveness metrics in various experimental settings.
Continual Facial Expression Recognition: A Benchmark
Authors: Nikhil Churamani, Tolga Dimlioglu, German I. Parisi, Hatice Gunes
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Understanding human affective behaviour, especially in the dynamics of real-world settings, requires Facial Expression Recognition (FER) models to continuously adapt to individual differences in user expression, contextual attributions, and the environment. Current (deep) Machine Learning (ML)-based FER approaches pre-trained in isolation on benchmark datasets fail to capture the nuances of real-world interactions where data is available only incrementally, acquired by the agent or robot during interactions. New learning comes at the cost of previous knowledge, resulting in catastrophic forgetting. Lifelong or Continual Learning (CL), on the other hand, enables adaptability in agents by being sensitive to changing data distributions, integrating new information without interfering with previously learnt knowledge. Positing CL as an effective learning paradigm for FER, this work presents the Continual Facial Expression Recognition (ConFER) benchmark that evaluates popular CL techniques on FER tasks. It presents a comparative analysis of several CL-based approaches on popular FER datasets such as CK+, RAF-DB, and AffectNet, and presents strategies for a successful implementation of ConFER for Affective Computing (AC) research. CL techniques, under different learning settings, are shown to achieve state-of-the-art (SOTA) performance across several datasets, thus motivating a discussion on the benefits of applying CL principles towards human behaviour understanding, particularly from facial expressions, as well as the challenges entailed.
Perpetual Humanoid Control for Real-time Simulated Avatars
Abstract
We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g. pose estimates from video or generated from language) and unexpected falls. Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces and learns to naturally recover from fail-states. Given a reference motion, our controller can perpetually control simulated avatars without requiring resets. At its core, we propose the progressive multiplicative control policy (PMCP), which dynamically allocates new network capacity to learn harder and harder motion sequences. PMCP allows efficient scaling for learning from large-scale motion databases and adding new tasks, such as fail-state recovery, without catastrophic forgetting. We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live and real-time multi-person avatar use case.
Adaptive Molecular Communication Receivers with Tunable Ligand-Receptor Interactions
Abstract
Molecular Communications (MC) underpins signaling in biological systems, enabling information transfer through biochemical molecules. The prospect of engineering this natural communication mechanism has inspired the Internet of Bio-Nano Things (IoBNT) applications, which rely on heterogeneous collaborative networks of natural and engineered biological devices, as well as artificial micro/nanomachines. A key attribute of natural MC systems is their adaptability, ensuring accurate information transmission in dynamic, time-varying biochemical environments. Therefore, integrating biological adaptation techniques into artificial MC networks, which are expected to operate in various biochemical environments, such as inside the human body, is essential for robust and biocompatible IoBNT applications. This study explores the design of bio-inspired adaptive MC receivers capable of tuning their response functions to maintain optimal detection performance in scenarios with time-varying received signals. The proposed receiver architectures are based on ligand-receptor interactions, with adaptivity achieved by modifying the sigmoidal-shaped ligand-receptor response curve in response to fluctuations in received signal statistics. The performance of these adaptive receivers is evaluated across a range of MC scenarios, including those with stochastic background interference, inter-symbol interference (ISI), and degrading enzymes, which involve time-varying scaling or shifting of received signals. Numerical results demonstrate the significant improvement in detection performance provided by adaptive receivers in dynamic MC scenarios.
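As a toy illustration of a tunable sigmoidal response, a Hill-type binding curve whose half-activation point tracks the received-signal statistics (the adaptation rule is an assumption of this sketch, not the paper's receiver model):

```python
import numpy as np

def receptor_response(c, K, n=2.0):
    """Fraction of bound receptors at ligand concentration c (Hill-type sigmoid)."""
    return c ** n / (c ** n + K ** n)

def adapt_half_activation(K, c_observed, alpha=0.9):
    # Exponential moving average of the received concentration keeps the steep,
    # informative part of the response curve centered on the incoming signals.
    return alpha * K + (1.0 - alpha) * c_observed
```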
SMART: Self-Morphing Anytime Replanning Tree
Authors: Zongyuan Shen, James P. Wilson, Shalabh Gupta, Ryan Harvey
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Abstract
The paper presents an algorithm, called Self-Morphing Anytime Replanning Tree (SMART), that facilitates anytime replanning in dynamic environments. SMART performs risk-based tree-pruning if its current path is obstructed by nearby moving obstacle(s), resulting in multiple disjoint subtrees. Then, for speedy recovery, it exploits these subtrees and performs informed tree-repair at hot-spots that lie at the intersection of subtrees to find a new path. The performance of SMART is comparatively evaluated with seven existing algorithms through extensive simulations. Two scenarios are considered with: 1) dynamic obstacles and 2) both static and dynamic obstacles. The results show that SMART yields significant improvements in replanning time, success rate and travel time. Finally, the performance of SMART is validated by a real laboratory experiment.
State Constrained Stochastic Optimal Control for Continuous and Hybrid Dynamical Systems Using DFBSDE
Authors: Bolun Dai, Prashanth Krishnamurthy, Andrew Papanicolaou, Farshad Khorrami
Abstract
We develop a computationally efficient learning-based forward-backward stochastic differential equations (FBSDE) controller for both continuous and hybrid dynamical (HD) systems subject to stochastic noise and state constraints. Solutions to stochastic optimal control (SOC) problems satisfy the Hamilton-Jacobi-Bellman (HJB) equation. Using current FBSDE-based solutions, the optimal control can be obtained from the HJB equations using deep neural networks (e.g., long short-term memory (LSTM) networks). To ensure the learned controller respects the constraint boundaries, we enforce the state constraints using a soft penalty function. Going beyond previous works, we adapt the deep FBSDE (DFBSDE) control framework to handle HD systems consisting of continuous dynamics and a deterministic discrete state change. We demonstrate our proposed algorithm in simulation on a continuous nonlinear system (cart-pole) and a hybrid nonlinear system (five-link biped).
A fast topological approach for predicting anomalies in time-varying graphs
Abstract
Large time-varying graphs are increasingly common in financial, social and biological settings. Feature extraction that efficiently encodes the complex structure of sparse, multi-layered, dynamic graphs presents computational and methodological challenges. In the past decade, a persistence diagram (PD) from topological data analysis (TDA) has become a popular descriptor of the shape of data with a well-defined distance between points. However, applications of TDA to graphs, where there is no intrinsic concept of distance between the nodes, remain largely unexplored. This paper addresses this gap in the literature by introducing a computationally efficient framework to extract shape information from graph data. Our framework has two main steps: first, we compute a PD using the so-called lower-star filtration which utilizes quantitative node attributes, and then vectorize it by averaging the associated Betti function over successive scale values on a one-dimensional grid. Our approach avoids embedding a graph into a metric space and has stability properties against input noise. In simulation studies, we show that the proposed vector summary leads to improved change point detection rate in time-varying graphs. In a real data application, our approach provides up to 22% gain in anomalous price prediction for the Ethereum cryptocurrency transaction networks.
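The Betti-function vectorization described above is simple enough to sketch directly (illustrative only; assumes a non-empty diagram of finite (birth, death) pairs):

```python
import numpy as np

def betti_vector(pd, grid):
    """Fixed-length summary of a persistence diagram via its Betti function.

    pd:   array of (birth, death) pairs from, e.g., a lower-star filtration
    grid: increasing scale values t_0 < t_1 < ... < t_m
    The Betti function at t counts intervals alive at t; averaging it over
    consecutive grid values yields one entry per cell.
    """
    pd = np.asarray(pd, dtype=float)
    betti = np.array([((pd[:, 0] <= t) & (pd[:, 1] > t)).sum() for t in grid])
    return 0.5 * (betti[1:] + betti[:-1])
```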
Neural Lyapunov Control for Discrete-Time Systems
Authors: Junlin Wu, Andrew Clark, Yiannis Kantaros, Yevgeniy Vorobeychik
Subjects: Machine Learning (cs.LG); Systems and Control (eess.SY)
Abstract
While ensuring stability for linear systems is well understood, it remains a major challenge for systems with nonlinear dynamics. A general approach in such cases is to leverage Lyapunov stability theory to compute a combination of a Lyapunov control function and an associated control policy. However, finding Lyapunov functions for general nonlinear systems is a challenging task. To address this challenge, several methods have been recently proposed that represent Lyapunov functions using neural networks. However, such approaches have been designed exclusively for continuous-time systems. We propose the first approach for learning neural Lyapunov control in discrete-time systems. Three key ingredients enable us to effectively learn provably stable control policies. The first is a novel mixed-integer linear programming approach for verifying the stability conditions in discrete-time systems. The second is a novel approach for computing sub-level sets which characterize the region of attraction. Finally, we rely on a heuristic gradient-based approach for quickly finding counterexamples to significantly speed up Lyapunov function learning. Our experiments on four standard benchmarks demonstrate that our approach significantly outperforms state-of-the-art baselines. For example, on the path tracking benchmark, we outperform recent neural Lyapunov control baselines by an order of magnitude in both running time and the size of the region of attraction, and on two of the four benchmarks (cartpole and PVTOL), ours is the first automated approach to return a provably stable controller.
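To illustrate the falsification step, a minimal counterexample search that ascends the Lyapunov decrease residual (a hedged stand-in using finite differences, not the paper's MILP verifier or gradient machinery):

```python
import numpy as np

def violation(V, f, x):
    """Discrete-time Lyapunov residual V(f(x)) - V(x); positive means violated."""
    return V(f(x)) - V(x)

def find_counterexample(V, f, x0, steps=200, lr=1e-2, eps=1e-4):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):                      # finite-difference gradient
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (violation(V, f, x + d) - violation(V, f, x - d)) / (2 * eps)
        x += lr * g                                  # ascend the violation
        if violation(V, f, x) > 0:
            return x                                 # V fails to decrease here
    return None
```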
Long-Tailed Question Answering in an Open World
Authors: Yi Dai, Hao Lang, Yinhe Zheng, Fei Huang, Yongbin Li
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
Real-world data often have an open long-tailed distribution, and building a unified QA model supporting various tasks is vital for practical QA applications. However, it is non-trivial to extend previous QA approaches since they either require access to seen tasks of adequate samples or do not explicitly model samples from unseen tasks. In this paper, we define Open Long-Tailed QA (OLTQA) as learning from long-tailed distributed data and optimizing performance over seen and unseen QA tasks. We propose an OLTQA model that encourages knowledge sharing between head, tail and unseen tasks, and explicitly mines knowledge from a large pre-trained language model (LM). Specifically, we organize our model through a pool of fine-grained components and dynamically combine these components for an input to facilitate knowledge sharing. A retrieve-then-rerank framework is further introduced to select in-context examples, which guide the LM to generate text that expresses knowledge for QA tasks. Moreover, a two-stage training approach is introduced to pre-train the framework by knowledge distillation (KD) from the LM and then jointly train the framework and a QA model through an adaptive mutual KD method. On a large-scale OLTQA dataset we curate from 43 existing QA datasets, our model consistently outperforms the state-of-the-art. We release the code and data at \url{https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/oltqa}.
An Asynchronous Massive Access Scheme with Dynamic Range Considerations
Abstract
This paper studies the performance of a transmission and reception scheme for massive access under some practical challenges. One challenge is the near-far problem, i.e., an access point often receives signals from different transmitting devices at vastly different signal strengths. Another challenge is that the signals from different devices may be subject to arbitrary, analog, and heterogeneous delays. This paper considers a fully asynchronous model, which is more realistic than the frame- or symbol-level synchrony assumed in most existing work. A main theorem characterizes the asymptotic scaling of the codelength with the number of devices, a device delay upper bound, and the dynamic range of received signal strengths across devices. The scaling result suggests potential advantages of grouping devices with similar received signal strengths and letting the groups use time sharing. The performance of the proposed scheme is evaluated using simulations with and without grouping.
How Expressive are Spectral-Temporal Graph Neural Networks for Time Series Forecasting?
Authors: Ming Jin, Guangsi Shi, Yuan-Fang Li, Qingsong Wen, Bo Xiong, Tian Zhou, Shirui Pan
Abstract
Spectral-temporal graph neural networks are a promising abstraction underlying most time series forecasting models that are based on graph neural networks (GNNs). However, little is known about the underpinnings of this branch of methods. In this paper, we establish a theoretical framework that unravels the expressive power of spectral-temporal GNNs. Our results show that linear spectral-temporal GNNs are universal under mild assumptions, and their expressive power is bounded by our extended first-order Weisfeiler-Leman algorithm on discrete-time dynamic graphs. To make our findings useful in practice on valid instantiations, we discuss related constraints in detail and outline a theoretical blueprint for designing spatial and temporal modules in spectral domains. Building on these insights, and to demonstrate how powerful spectral-temporal GNNs are based on our framework, we propose a simple instantiation named Temporal Graph GegenConv (TGC), which significantly outperforms most existing models with only linear components and shows better model efficiency.
AEWAE: An Efficient Ensemble Framework for Concept Drift Adaptation in IoT Data Stream
Authors: Yafeng Wu, Lan Liu, Yongjie Yu, Guiming Chen, Junhan Hu
Abstract
With the evolution of the fifth-generation (5G) wireless network, smart technology based on the Internet of Things (IoT) has become increasingly popular. As a crucial component of smart technology, IoT systems for service delivery often face concept drift issues in network data stream analytics due to dynamic IoT environments, resulting in performance degradation. In this article, we propose a drift-adaptive framework called Adaptive Exponentially Weighted Average Ensemble (AEWAE), consisting of three stages: IoT data preprocessing, base model learning, and online ensembling. It is a data stream analytics framework that dynamically adjusts the ensemble to tackle various drift scenarios. Experimental results on two public IoT datasets demonstrate that our proposed framework outperforms state-of-the-art methods, achieving high accuracy and efficiency in IoT data stream analytics.
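For intuition, the exponentially weighted averaging at the heart of such an ensemble can be sketched as a multiplicative-weights update (a generic sketch under assumed `predict` interfaces, not the AEWAE implementation):

```python
import numpy as np

class EWAEnsemble:
    """Exponentially weighted average of base models over a data stream."""

    def __init__(self, models, eta=0.5):
        self.models = models
        self.eta = eta
        self.w = np.ones(len(models)) / len(models)

    def predict(self, x):
        preds = np.array([m.predict(x) for m in self.models])
        return float(self.w @ preds)

    def update(self, x, y):
        # Down-weight base models with large loss so the ensemble re-balances
        # quickly after a concept drift in the stream.
        losses = np.array([(m.predict(x) - y) ** 2 for m in self.models])
        self.w *= np.exp(-self.eta * losses)
        self.w /= self.w.sum()
```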
Optimal Algorithms for Bounded Weighted Edit Distance
Authors: Alejandro Cassis, Tomasz Kociumaka, Philip Wellnitz
Abstract
The edit distance of two strings is the minimum number of insertions, deletions, and substitutions of characters needed to transform one string into the other. The textbook dynamic-programming algorithm computes the edit distance of two length-$n$ strings in $O(n^2)$ time, which is optimal up to subpolynomial factors under SETH. An established way of circumventing this hardness is to consider the bounded setting, where the running time is parameterized by the edit distance $k$. A celebrated algorithm by Landau and Vishkin (JCSS '88) achieves time $O(n + k^2)$, which is optimal as a function of $n$ and $k$. Most practical applications rely on a more general weighted edit distance, where each edit has a weight depending on its type and the involved characters from the alphabet $\Sigma$. This is formalized through a weight function $w : (\Sigma\cup\{\varepsilon\})\times(\Sigma\cup\{\varepsilon\})\to\mathbb{R}$ normalized so that $w(a,a)=0$ and $w(a,b)\geq 1$ for all $a,b \in \Sigma\cup\{\varepsilon\}$ with $a \neq b$; the goal is to find an alignment of the two strings minimizing the total weight of edits. The $O(n^2)$-time algorithm supports this setting seamlessly, but only very recently, Das, Gilbert, Hajiaghayi, Kociumaka, and Saha (STOC '23) gave the first non-trivial algorithm for the bounded version, achieving time $O(n + k^5)$. While this running time is linear for $k\le n^{1/5}$, it is still very far from the bound $O(n+k^2)$ achievable in the unweighted setting. In this paper, we essentially close this gap by showing both an improved $\tilde O(n+\sqrt{nk^3})$-time algorithm and, more surprisingly, a matching lower bound: Conditioned on the All-Pairs Shortest Paths (APSP) hypothesis, our running time is optimal for $\sqrt{n}\le k\le n$ (up to subpolynomial factors). This is the first separation between the complexity of the weighted and unweighted edit distance problems.
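The textbook quadratic-time algorithm mentioned above is worth spelling out, since everything in the paper is measured against it (a direct sketch of the standard DP):

```python
def weighted_edit_distance(s, t, w):
    """Textbook O(|s|*|t|) DP; w(a, b) is the edit weight, w(a, a) == 0,
    and w(a, '') / w('', b) are deletion / insertion weights."""
    n, m = len(s), len(t)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + w(s[i - 1], '')
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + w('', t[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + w(s[i - 1], ''),           # deletion
                D[i][j - 1] + w('', t[j - 1]),           # insertion
                D[i - 1][j - 1] + w(s[i - 1], t[j - 1])  # substitution / match
            )
    return D[n][m]

# Unweighted special case recovers the classic edit distance:
unit = lambda a, b: 0 if a == b else 1
assert weighted_edit_distance("kitten", "sitting", unit) == 3
```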
The effect of linear dispersive errors on nonlinear time-stepping accuracy
Abstract
For simulations of time-evolution problems, such as weather and climate models, taking the largest stable time-step is advantageous for reducing the wall-clock time. We propose methods for studying the effect of linear dispersive errors on the time-stepping accuracy of nonlinear problems. We demonstrate an application of this to the Rotating Shallow Water Equations (RSWEs). To begin, a nonlinear time-stepping `triadic error' metric is constructed from three-wave interactions. Stability polynomials, obtained from the oscillatory Dahlquist test equation, enable the computation of triadic errors for different time-steppers; we compare five classical schemes. We next provide test cases comparing different time-step sizes within a numerical model. The first case is of a reforming Gaussian height perturbation. This contains a nonlinear phase shift that can be missed with a large time-step. The second set of test cases initialise individual waves to allow specific triads to form. The presence of a slow transition from linear to nonlinear dynamics creates a good venue for testing how the slow phase information is replicated with a large time-step. Three models, including the finite-element code Gusto and the Met Office's new LFRic model, are examined in these test cases with different time-steppers.
Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond
Authors: Zhu Liu, Jinyuan Liu, Guanyao Wu, Long Ma, Xin Fan, Risheng Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding, have attracted widespread attention for intelligent vision systems. However, early efforts always consider boosting a single task unilaterally and neglecting others, seldom investigating their underlying connections for joint promotion. To overcome these limitations, we establish a hierarchical dual-tasks-driven deep model to bridge these tasks. Concretely, we first construct an image fusion module to fuse complementary characteristics and cascade dual task-related modules, including a discriminator for visual effects and a semantic network for feature measurement. We provide a bi-level perspective to formulate image fusion and follow-up downstream tasks. To incorporate distinct task-related responses for image fusion, we consider image fusion as a primary goal and the dual modules as learnable constraints. Furthermore, we develop an efficient first-order approximation to compute corresponding gradients and present dynamic weighted aggregation to balance the gradients for fusion learning. Extensive experiments demonstrate the superiority of our method, which not only produces visually pleasant fused results but also yields significant improvements in detection and segmentation over state-of-the-art approaches.
Investigating the generative dynamics of energy-based neural networks
Authors: Lorenzo Tausani, Alberto Testolin, Marco Zorzi
Subjects: Neural and Evolutionary Computing (cs.NE); Machine Learning (cs.LG)
Abstract
Generative neural networks can produce data samples according to the statistical properties of their training distribution. This feature can be used to test modern computational neuroscience hypotheses suggesting that spontaneous brain activity is partially supported by top-down generative processing. A widely studied class of generative models is that of Restricted Boltzmann Machines (RBMs), which can be used as building blocks for unsupervised deep learning architectures. In this work, we systematically explore the generative dynamics of RBMs, characterizing the number of states visited during top-down sampling and investigating whether the heterogeneity of visited attractors could be increased by starting the generation process from biased hidden states. By considering an RBM trained on a classic dataset of handwritten digits, we show that the capacity to produce diverse data prototypes can be increased by initiating top-down sampling from chimera states, which encode high-level visual features of multiple digits. We also found that the model is not capable of transitioning between all possible digit states within a single generation trajectory, suggesting that the top-down dynamics is heavily constrained by the shape of the energy function.
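The top-down sampling procedure studied here is ordinary alternating Gibbs sampling in a binary RBM, which is compact enough to sketch (illustrative; initialization from a biased 'chimera' hidden state is what the paper varies):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_generate(W, b_vis, b_hid, h0, steps, rng):
    """Alternate v ~ p(v|h) and h ~ p(h|v), returning the visible samples.

    W: (n_vis, n_hid) weights; h0: initial hidden state, e.g. a chimera state
    mixing high-level features of several digit classes.
    """
    h, samples = h0.copy(), []
    for _ in range(steps):
        v = (rng.random(b_vis.shape) < sigmoid(W @ h + b_vis)).astype(float)
        h = (rng.random(b_hid.shape) < sigmoid(W.T @ v + b_hid)).astype(float)
        samples.append(v)
    return np.array(samples)
```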
Comparison of Clustering Algorithms for Statistical Features of Vibration Data Sets
Authors: Philipp Sepin, Jana Kemnitz, Safoura Rezapour Lakani, Daniel Schall
Abstract
Vibration-based condition monitoring systems are receiving increasing attention due to their ability to accurately identify different conditions by capturing dynamic features over a broad frequency range. However, there is little research on clustering approaches in vibration data, and the resulting solutions are often optimized for a single data set. In this work, we present an extensive comparison of the clustering algorithms K-means, OPTICS, and Gaussian mixture model clustering (GMM) applied to statistical features extracted from the time and frequency domains of vibration data sets. Furthermore, we investigate the influence of feature combinations, feature selection using principal component analysis (PCA), and the specified number of clusters on the performance of the clustering algorithms. We conducted this comparison in terms of a grid search using three different benchmark data sets. Our work showed that averaging (Mean, Median) and variance-based features (Standard Deviation, Interquartile Range) performed significantly better than shape-based features (Skewness, Kurtosis). In addition, K-means outperformed GMM slightly for these data sets, whereas OPTICS performed significantly worse. We were also able to show that feature combinations as well as PCA feature selection did not result in any significant performance improvements. With an increase in the specified number of clusters, clustering algorithms performed better, although there were some specific algorithmic restrictions.
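The feature-then-cluster pipeline compared in the study can be reproduced in outline with scikit-learn (a minimal sketch on synthetic windows; the feature subset and cluster counts are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def statistical_features(window):
    """Per-window time-domain statistics (a subset of those in the study)."""
    q75, q25 = np.percentile(window, [75, 25])
    return np.array([np.mean(window), np.median(window), np.std(window), q75 - q25])

rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 1024))               # stand-in vibration windows
X = np.vstack([statistical_features(w) for w in windows])

labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_gmm = GaussianMixture(n_components=3, random_state=0).fit(X).predict(X)
```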
A Data-Driven Approach to Lightweight DVFS-Aware Counter-Based Power Modeling for Heterogeneous Platforms
Authors: Sergio Mazzola, Thomas Benz, Björn Forsberg, Luca Benini
Abstract
Computing systems have shifted towards highly parallel and heterogeneous architectures to tackle the challenges imposed by limited power budgets. These architectures must be supported by novel power management paradigms addressing the increasing design size, parallelism, and heterogeneity while ensuring high accuracy and low overhead. In this work, we propose a systematic, automated, and architecture-agnostic approach to accurate and lightweight DVFS-aware statistical power modeling of the CPU and GPU sub-systems of a heterogeneous platform, driven by the sub-systems' local performance monitoring counters (PMCs). Counter selection is guided by a generally applicable statistical method that identifies the minimal subsets of counters robustly correlating to power dissipation. Based on the selected counters, we train a set of lightweight, linear models characterizing each sub-system over a range of frequencies. Such models compose a lookup-table-based system-level model that efficiently captures the non-linearity of power consumption, showing desirable responsiveness and decomposability. We validate the system-level model on real hardware by measuring the total energy consumption of an NVIDIA Jetson AGX Xavier platform over a set of benchmarks. The resulting average estimation error is 1.3%, with a maximum of 3.1%. Furthermore, the model shows a maximum evaluation runtime of 500 ns, thus implying a negligible impact on system utilization and applicability to online dynamic power management (DPM).
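The per-operating-point linear models composing such a lookup-table power model amount to ordinary least squares on counter readings (an editorial sketch with an assumed data layout):

```python
import numpy as np

def fit_power_model(pmc, power):
    """Least-squares linear model P ~ w . pmc + w0 for one (sub-system, frequency).

    pmc: (n_samples, n_counters) PMC readings; power: (n_samples,) measured power.
    """
    A = np.hstack([pmc, np.ones((pmc.shape[0], 1))])   # append intercept column
    coef, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coef

def predict_power(models, subsystem, freq, pmc_sample):
    # models[(subsystem, freq)] holds one linear model per DVFS operating point;
    # together the table captures the non-linear frequency dependence.
    coef = models[(subsystem, freq)]
    return pmc_sample @ coef[:-1] + coef[-1]
```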
Utility-Maximizing Bidding Strategy for Data Consumers in Auction-based Federated Learning
Authors: Xiaoli Tang, Han Yu
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
Abstract
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners to join FL through economic means. Existing works assume that only one data consumer and multiple data owners exist in an AFL marketplace (i.e., a monopoly market). Therefore, data owners bid to join the data consumer for FL. However, this assumption is not realistic in practical AFL marketplaces, in which multiple data consumers can compete to attract data owners to join their respective FL tasks. In this paper, we bridge this gap by proposing a first-of-its-kind utility-maximizing bidding strategy for data consumers in federated learning (Fed-Bidder). It enables multiple FL data consumers to compete for data owners via AFL effectively and efficiently by providing utility estimation capabilities that can accommodate diverse forms of winning functions, each reflecting different market dynamics. Extensive experiments based on six commonly adopted benchmark datasets show that Fed-Bidder is significantly more advantageous compared to four state-of-the-art approaches.
Quality Competition Among Internet Service Providers in a Path-Aware Internet
Authors: Simon Scherrer, Seyedali Tabaeiaghdaei, Adrian Perrig
Subjects: Networking and Internet Architecture (cs.NI)
Abstract
Internet service providers (ISPs) have a variety of quality attributes that determine their attractiveness for data transmission, ranging from quality-of-service metrics such as jitter to security properties such as the presence of DDoS defense systems. ISPs improve these attributes in line with their profit objective, i.e., up to the level that maximizes revenue from attracted traffic while minimizing attribute-related cost, all in the context of alternative offers by competing ISPs. In today's Internet, this quality competition mostly takes place between ISPs that are next-hop options for a given destination. In contrast, emerging path-aware networks enable end-points to select entire inter-domain forwarding paths, and thus intensify ISP competition. In this paper, we analyze how path-aware networking changes the competition dynamics in the Internet, and how path quality and ISP profits are affected as a result. To that end, we develop a game-theoretic model in which ISPs (i) affect path quality via multiple attributes that entail costs, (ii) constitute paths together with other selfish ISPs, and (iii) are in competition with alternative paths when attracting traffic. The model enables an extensive theoretical analysis, surprisingly showing that end-point path selection can have both positive and negative effects on path quality and ISP profits, depending on the network topology and the cost structure of ISPs. However, a large-scale simulation, which draws on real-world data to set model parameters, shows that the positive effects will likely prevail in practice: Compared to a single-path scenario, the prevalence of quality attributes increases by at least 50%, and 75% of ISPs improve their profit if the end-points can choose among 5 paths towards any destination.
Using a Bayesian-Inference Approach to Calibrating Models for Simulation in Robotics
Authors: Huzaifa Mustafa Unjhawala, Ruochun Zhang, Wei Hu, Jinlong Wu, Radu Serban, Dan Negrut
Abstract
In robotics, simulation has the potential to reduce design time and costs, and lead to a more robust engineered solution and a safer development process. However, the use of simulators is predicated on the availability of good models. This contribution is concerned with improving the quality of these models via calibration, which is cast herein in a Bayesian framework. First, we discuss the Bayesian machinery involved in model calibration. Then, we demonstrate it in one example: calibration of a vehicle dynamics model that has a low degree-of-freedom count and can be used for state estimation, model predictive control, or path planning. A high-fidelity simulator is used to emulate the ``experiments'' and generate the data for the calibration. The merit of this work is not tied to a new Bayesian methodology for calibration, but to the demonstration of how the Bayesian machinery can establish connections among models in computational dynamics, even when the data in use is noisy. The software used to generate the results reported herein is available in a public repository for unfettered use and distribution.
REMaQE -- Reverse Engineering Math Equations from Executables
Abstract
Cybersecurity attacks against industrial control systems and cyber-physical systems can cause catastrophic real-world damage by infecting device binaries with malware. Mitigating such attacks can benefit from reverse engineering tools that recover sufficient semantic knowledge in terms of mathematical operations in the code. Conventional reverse engineering tools can decompile binaries to low-level code, but offer little semantic insight. This paper proposes REMaQE, an automated framework for reverse engineering of math equations from binary executables. REMaQE uses symbolic execution for dynamic analysis of the binary to extract the relevant semantic knowledge of the implemented algorithms. REMaQE provides an automatic parameter analysis pass which also leverages symbolic execution to identify input, output, and constant parameters of the implemented math equations. REMaQE automatically handles parameters accessed via registers, the stack, global memory, or pointers, and supports reverse engineering of object-oriented implementations such as C++ classes. REMaQE uses an algebraic simplification method which allows it to scale to complex conditional equations with ease. These features make REMaQE stand out over existing reverse engineering approaches for math equations. On a dataset of randomly generated math equations compiled to binaries from C and Simulink implementations, REMaQE accurately recovers a semantically matching equation for 97.53% of the models. For complex equations with more operations, accuracy stays consistently over 94%. REMaQE executes in 0.25 seconds on average and in 1.3 seconds for more complex equations. This real-time execution speed enables a smooth integration in an interactive mathematics-oriented reverse engineering workflow.
Abstract
Identifying the underlying dynamics of physical systems can be challenging when only provided with observational data. In this work, we consider systems that can be modelled as first-order ordinary differential equations. By assuming a certain pseudo-Hamiltonian formulation, we are able to learn the analytic terms of internal dynamics even if the model is trained on data where the system is affected by unknown damping and external disturbances. In cases where it is difficult to find analytic terms for the disturbances, a hybrid model that uses a neural network to learn these can still accurately identify the dynamics of the system as if under ideal conditions. This makes the models applicable in situations where other system identification models fail. Furthermore, we propose to use a fourth-order symmetric integration scheme in the loss function and avoid actual integration in the training, and demonstrate on varied examples how this leads to increased performance on noisy data.
Subword Segmental Machine Translation: Unifying Segmentation and Target Sentence Generation
Abstract
Subword segmenters like BPE operate as a preprocessing step in neural machine translation and other (conditional) language models. They are applied to datasets before training, so translation or text generation quality relies on the quality of segmentations. We propose a departure from this paradigm, called subword segmental machine translation (SSMT). SSMT unifies subword segmentation and MT in a single trainable model. It learns to segment target sentence words while jointly learning to generate target sentences. To use SSMT during inference we propose dynamic decoding, a text generation algorithm that adapts segmentations as it generates translations. Experiments across 6 translation directions show that SSMT improves chrF scores for morphologically rich agglutinative languages. Gains are strongest in the very low-resource scenario. SSMT also learns subwords that are closer to morphemes compared to baselines and proves more robust on a test set constructed for evaluating morphological compositional generalisation.
Neural Lyapunov Control for Discrete-Time Systems
Long-Tailed Question Answering in an Open World
An Asynchronous Massive Access Scheme with Dynamic Range Considerations
How Expressive are Spectral-Temporal Graph Neural Networks for Time Series Forecasting?
AEWAE: An Efficient Ensemble Framework for Concept Drift Adaptation in IoT Data Stream
Optimal Algorithms for Bounded Weighted Edit Distance
The effect of linear dispersive errors on nonlinear time-stepping accuracy
Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond
Investigating the generative dynamics of energy-based neural networks
Comparison of Clustering Algorithms for Statistical Features of Vibration Data Sets
A Data-Driven Approach to Lightweight DVFS-Aware Counter-Based Power Modeling for Heterogeneous Platforms
Utility-Maximizing Bidding Strategy for Data Consumers in Auction-based Federated Learning
Quality Competition Among Internet Service Providers in a Path-Aware Internet
Using a Bayesian-Inference Approach to Calibrating Models for Simulation in Robotics
REMaQE -- Reverse Engineering Math Equations from Executables
Pseudo-Hamiltonian system identification
Subword Segmental Machine Translation: Unifying Segmentation and Target Sentence Generation