Abstract
Machine learning models are often provisioned as a cloud-based service where clients send their data to the service provider to obtain the result. This setting is commonplace due to the high value of the models, but it requires clients to forfeit the privacy of the query data. Homomorphic encryption (HE) is a promising technique to address this adversity. With HE, the service provider can take encrypted data as a query and run the model without decrypting it. The result remains encrypted, and only the client can decrypt it. All these benefits come at a steep computational cost, because HE turns simple floating-point arithmetic into computation between long polynomials (of degree over 1024). Previous work has proposed tailoring deep neural networks for efficient computation over encrypted data, but their already high computational cost is amplified further by HE, hindering performance improvement. In this paper we show that hyperdimensional computing can come to the rescue of privacy-preserving machine learning over encrypted data. We find that the performance advantage of hyperdimensional computing is amplified when working with HE. This observation led us to design HE-HDC, a machine-learning inference system that uses hyperdimensional computing with HE. We carefully structure the machine-learning service so that the server performs only HE-friendly computation. Moreover, we adapt the computation and HE parameters to expedite computation while preserving accuracy and security. Our experimental results based on real measurements show that HE-HDC outperforms existing systems by 26~3000 times with comparable classification accuracy.
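To make the HE-friendliness concrete, the plaintext sketch below shows why HDC inference suits HE: the server-side step reduces to dot products between an encoded query and class hypervectors, i.e., purely linear algebra of the kind CKKS-style schemes evaluate natively. All shapes and the encoding recipe are illustrative, not HE-HDC's actual implementation.

```python
# Minimal plaintext sketch of HDC inference (hypothetical shapes/recipe).
# The server-side step is a single matrix-vector product -- exactly the
# kind of linear operation that CKKS-style HE evaluates efficiently.
import numpy as np

D, n_classes, n_features = 4096, 10, 64
rng = np.random.default_rng(0)

# Client side: encode a feature vector into a D-dimensional hypervector
# by bundling random per-feature basis hypervectors (one common HDC recipe).
basis = rng.choice([-1.0, 1.0], size=(n_features, D))
def encode(x):
    return np.sign(x @ basis)

class_hvs = rng.choice([-1.0, 1.0], size=(n_classes, D))  # trained offline

query = encode(rng.normal(size=n_features))
# Server side (done under HE in HE-HDC): dot-product similarity only.
scores = class_hvs @ query
print("predicted class:", int(np.argmax(scores)))
```

Under HE, `query` would arrive encrypted, the matrix-vector product would be evaluated homomorphically, and the client would decrypt `scores` and take the argmax locally.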
Malware Classification using Deep Neural Networks: Performance Evaluation and Applications in Edge Devices
Authors: Akhil M R, Adithya Krishna V Sharma, Harivardhan Swamy, Pavan A, Ashray Shetty, Anirudh B Sathyanarayana
Subjects: Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Abstract
With the increasing extent of malware attacks in the present day, along with the difficulty in detecting modern malware, it is necessary to evaluate the effectiveness and performance of Deep Neural Networks (DNNs) for malware classification. Multiple DNN architectures can be designed and trained to detect and classify malware binaries. Results demonstrate the potential of DNNs in accurately classifying malware, with high accuracy rates observed across different malware types. Additionally, the feasibility of deploying these DNN models on edge devices to enable real-time classification, particularly in resource-constrained scenarios, proves to be integral to large IoT systems. By optimizing model architectures and leveraging edge computing capabilities, the proposed methodologies achieve efficient performance even with limited resources. This study contributes to advancing malware detection techniques and emphasizes the significance of integrating cybersecurity measures for the early detection of malware, further preventing the adverse effects caused by such attacks. Optimal considerations regarding the distribution of security tasks to edge devices are addressed to ensure that the integrity and availability of large-scale IoT systems are not compromised by malware attacks, advocating for a more resilient and secure digital ecosystem.
Performance Analysis of Various EfficientNet Based U-Net++ Architecture for Automatic Building Extraction from High Resolution Satellite Images
Abstract
Building extraction is an essential component of study in the science of remote sensing, and applications for building extraction heavily rely on semantic segmentation of high-resolution remote sensing imagery. However, gaps in semantic information extraction constrain present deep-learning-based approaches and can result in inadequate segmentation outcomes. To address this issue and extract buildings with high accuracy, various EfficientNet-backbone-based U-Net++ models are proposed in this study. The designed network, based on U-Net, can improve the sensitivity of the model through deep supervision and voluminous redesigned skip-connections, hence reducing the influence of irrelevant feature areas in the background. Various EfficientNet-backbone-based encoders have been employed when training the network to enhance the capacity of the model to extract more relevant features. According to the experimental findings, the suggested model significantly outperforms previous cutting-edge approaches. Among the five EfficientNet variants, U-Net++ based on EfficientNet-B4 achieved the best result, scoring a mean accuracy of 92.23%, mean IoU of 88.32%, and mean precision of 93.2% on the publicly available Massachusetts building dataset, thus showing the promise of the model for automatic building extraction from high-resolution satellite images.
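As a rough illustration of the architecture family being compared, the snippet below instantiates an EfficientNet-backed U-Net++ via the open-source segmentation_models_pytorch package; this is an assumption for illustration, since the abstract does not state the authors' implementation.

```python
# Sketch of an EfficientNet-B4-backed U-Net++ using the open-source
# segmentation_models_pytorch package (an assumption -- the paper's own
# implementation details are not given in the abstract).
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b4",   # swap in b0..b4 to reproduce the comparison
    encoder_weights="imagenet",       # ImageNet-pretrained encoder
    in_channels=3,
    classes=1,                        # binary building / background mask
)

x = torch.randn(2, 3, 256, 256)       # dummy batch of RGB tiles
with torch.no_grad():
    mask_logits = model(x)            # -> (2, 1, 256, 256)
print(mask_logits.shape)
```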
DeepTriNet: A Tri-Level Attention Based DeepLabv3+ Architecture for Semantic Segmentation of Satellite Images
Authors: Tareque Bashar Ovi, Shakil Mosharrof, Nomaiya Bashree, Md Shofiqul Islam, Muhammad Nazrul Islam
Abstract
The segmentation of satellite images is crucial in remote sensing applications. Existing methods face challenges in recognizing small-scale objects in satellite images for semantic segmentation, primarily due to ignoring the low-level characteristics of the underlying network and because different feature maps contain distinct amounts of information. Thus, in this research, a tri-level attention-based DeepLabv3+ architecture (DeepTriNet) is proposed for the semantic segmentation of satellite images. The proposed hybrid method combines squeeze-and-excitation networks (SENets) and tri-level attention units (TAUs) with the vanilla DeepLabv3+ architecture, where the TAUs are used to bridge the semantic feature gap among encoder outputs and the SENets are used to put more weight on relevant features. The proposed DeepTriNet determines which features are the most relevant in a generalized way through self-supervision rather than manual annotation. The study showed that the proposed DeepTriNet performs better than many conventional techniques, with accuracy of 98% and 77%, IoU of 80% and 58%, precision of 88% and 68%, and recall of 79% and 55% on the 4-class Land-Cover.ai dataset and the 15-class GID-2 dataset, respectively. The proposed method will greatly contribute to natural resource management and change detection in rural and urban regions through efficient and semantic satellite image segmentation.
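For reference, a standard squeeze-and-excitation block, the mechanism SENets contribute here, can be sketched in a few lines of PyTorch; channel sizes and the reduction ratio are illustrative.

```python
# A standard squeeze-and-excitation (SE) block of the kind DeepTriNet uses
# to re-weight feature channels; sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: channel weights
        return x * w                                # re-scale channels

out = SEBlock(64)(torch.randn(1, 64, 32, 32))
print(out.shape)
```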
Open SYCL on heterogeneous GPU systems: A case of study
Authors: Rocío Carratalá-Sáez, Francisco J. Andújar, Yuri Torres, Arturo Gonzalez-Escribano, Diego R. Llanos
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Abstract
Computational platforms for high-performance scientific applications are becoming more heterogeneous, including hardware accelerators such as multiple GPUs. Applications in a wide variety of scientific fields require efficient and careful management of the computational resources of this type of hardware to obtain the best possible performance. However, there are currently different GPU vendors, architectures and families that can be found in heterogeneous clusters or machines. Programming with the vendor-provided languages or frameworks, and optimizing for specific devices, may become cumbersome and compromise portability to other systems. To overcome this problem, several proposals for high-level heterogeneous programming have appeared, trying to reduce the development effort and increase functional and performance portability, specifically when using GPU hardware accelerators. This paper evaluates the SYCL programming model, using the Open SYCL compiler, from two different perspectives: the performance it offers when dealing with single or multiple GPU devices from the same or different vendors, and the development effort required to implement the code. We use the Finite Time Lyapunov Exponent calculation over two real-world scenarios as a case study and compare the performance and development effort of its Open SYCL-based version against the equivalent versions that use CUDA or HIP. Based on the experimental results, we observe that the use of SYCL does not lead to a remarkable overhead in terms of GPU kernel execution time. In general terms, the Open SYCL development effort for the host code is lower than that observed with CUDA or HIP. Moreover, the SYCL version can take advantage of both CUDA and AMD GPU devices simultaneously much more easily than directly using the vendor-specific programming solutions.
Flood and Echo: Algorithmic Alignment of GNNs with Distributed Computing
Authors: Joël Mathys, Florian Grötschl, Kalyan Varma Nadimpalli, Roger Wattenhofer
Abstract
Graph Neural Networks are a natural fit for learning algorithms. They can directly represent tasks through an abstract but versatile graph structure and handle inputs of different sizes. This opens up the possibility of scaling and extrapolation to larger graphs, one of the most important advantages of an algorithm. However, this raises two core questions: i) how can we enable nodes to gather the required information in a given graph ($\textit{information exchange}$), even if it is far away, and ii) how can we design an execution framework that enables this information exchange for extrapolation to larger graph sizes ($\textit{algorithmic alignment for extrapolation}$)? We propose a new execution framework that is inspired by the design principles of distributed algorithms: Flood and Echo Net. It propagates messages through the entire graph in a wave-like activation pattern, which naturally generalizes to larger instances. Through its sparse but parallel activations it is provably more efficient in terms of message complexity. We study the proposed model and provide both empirical evidence and theoretical insights in terms of its expressiveness, efficiency, information exchange and ability to extrapolate.
Efficient Path Planning in Large Unknown Environments with Switchable System Models for Automated Vehicles
Authors: Oliver Schumann, Michael Buchholz, Klaus Dietmayer
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Abstract
Large environments are challenging for path planning algorithms as the size of the configuration space increases. Furthermore, if the environment is mainly unexplored, large portions of the path are planned through unknown areas. Hence, a complete replanning of the entire path occurs whenever the path collides with newly discovered obstacles. We propose a novel method that stops the path planning algorithm after a certain distance. It is used to navigate the algorithm in large environments and is not prone to the problems of existing navigation approaches. Furthermore, we developed a method to detect significant environment changes to allow more efficient replanning. Finally, we extend the path planner for use in the U-Shift concept vehicle, which can switch to another system model and rotate around the center of its rear axle. The results show that the proposed methods generate nearly identical paths compared to the standard Hybrid A* while drastically reducing the execution time. Furthermore, we show that the extended path planning algorithm enables the efficient use of the maneuvering capabilities of the concept vehicle to plan concise paths in narrow environments.
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
Abstract
We present Ultima, a new collective-communication system for the cloud with bounded, predictable completion times for deep-learning jobs in the presence of computation (stragglers) and communication (congestion and gradient drops) variabilities. Ultima exploits the inherent resiliency and the stochastic nature of distributed deep-learning (DDL) training to work with approximated gradients, and provides an efficient balance between (tail) performance and the resulting accuracy of the trained models. Exploiting this domain-specific characteristic of DDL, Ultima introduces (1) mechanisms (e.g., Transpose AllReduce, unreliable connection-oriented transport, and adaptive timeout) to improve the DDL jobs' tail execution time, and (2) strategies (e.g., Hadamard Transform) to mitigate the impact of gradient drops on model accuracy. Our evaluation shows that Ultima achieves 60% faster time-to-accuracy (TTA), on average, when operating in shared environments (e.g., public cloud), and is on par with existing algorithms (e.g., Ring-AllReduce) in dedicated environments (like HPC).
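A minimal sketch of the Hadamard-transform strategy mentioned above: transforming a gradient spreads each coordinate's information across all entries, so a few dropped packets perturb every coordinate slightly instead of erasing a few entirely. The drop model and sizes are illustrative, not Ultima's wire protocol.

```python
# Illustrative fast Walsh-Hadamard transform applied to a gradient vector;
# the 10% drop below is a toy stand-in for packets lost in transit.
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform; len(a) must be a power of 2."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            x, y = a[i:i+h].copy(), a[i+h:i+2*h].copy()
            a[i:i+h], a[i+h:i+2*h] = x + y, x - y
        h *= 2
    return a

rng = np.random.default_rng(0)
grad = rng.normal(size=1024)
coded = fwht(grad)
coded[rng.random(1024) < 0.1] = 0.0     # 10% of entries lost in transit
recovered = fwht(coded) / 1024          # inverse transform = forward / n
print("relative error:", np.linalg.norm(recovered - grad) / np.linalg.norm(grad))
```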
CarDS-Plus ECG Platform: Development and Feasibility Evaluation of a Multiplatform Artificial Intelligence Toolkit for Portable and Wearable Device Electrocardiograms
Authors: Sumukh Vasisht Shankar, Evangelos K Oikonomou, Rohan Khera
Subjects: Machine Learning (cs.LG); Signal Processing (eess.SP)
Abstract
In the rapidly evolving landscape of modern healthcare, the integration of wearable & portable technology provides a unique opportunity for personalized health monitoring in the community. Devices like the Apple Watch, FitBit, and AliveCor KardiaMobile have revolutionized the acquisition and processing of intricate health data streams. Amidst the variety of data collected by these gadgets, single-lead electrocardiogram (ECG) recordings have emerged as a crucial source of information for monitoring cardiovascular health. There have been significant advances in artificial intelligence capable of interpreting these 1-lead ECGs, facilitating clinical diagnosis as well as the detection of rare cardiac disorders. This design study describes the development of an innovative multiplatform system aimed at the rapid deployment of AI-based ECG solutions for clinical investigation & care delivery. The study examines design considerations, aligning them with specific applications, and develops data flows to maximize efficiency for research & clinical use. This process encompasses the reception of single-lead ECGs from diverse wearable devices, channeling this data into a centralized data lake & facilitating real-time inference through AI models for ECG interpretation. An evaluation of the platform demonstrates a mean duration from acquisition to reporting of results of 33.0 to 35.7 seconds, after a standard 30-second acquisition. There were no substantial differences in acquisition-to-reporting times across two commercially available devices (Apple Watch and KardiaMobile). These results demonstrate the successful translation of design principles into a fully integrated & efficient strategy for leveraging 1-lead ECGs across platforms & interpretation by AI-ECG algorithms. Such a platform is critical to translating AI discoveries for wearable and portable ECG devices to clinical impact through rapid deployment.
Neural Relational Inference with Fast Modular Meta-learning
Abstract
\textit{Graph neural networks} (GNNs) are effective models for many dynamical systems consisting of entities and relations. Although most GNN applications assume a single type of entity and relation, many situations involve multiple types of interactions. \textit{Relational inference} is the problem of inferring these interactions and learning the dynamics from observational data. We frame relational inference as a \textit{modular meta-learning} problem, where neural modules are trained to be composed in different ways to solve many tasks. This meta-learning framework allows us to implicitly encode time invariance and infer relations in the context of one another rather than independently, which increases inference capacity. Framing inference as the inner-loop optimization of meta-learning leads to a model-based approach that is more data-efficient and capable of estimating the state of entities that we do not observe directly, but whose existence can be inferred from their effect on observed entities. To address the large search space of graph neural network compositions, we meta-learn a \textit{proposal function} that speeds up the inner-loop simulated annealing search within the modular meta-learning algorithm, providing a two-orders-of-magnitude increase in the size of problems that can be addressed.
A predict-and-optimize approach to profit-driven churn prevention
Abstract
In this paper, we introduce a novel predict-and-optimize method for profit-driven churn prevention. We frame the task of targeting customers for a retention campaign as a regret minimization problem. The main objective is to leverage individual customer lifetime values (CLVs) to ensure that only the most valuable customers are targeted. In contrast, many profit-driven strategies focus on churn probabilities while considering average CLVs. This often results in significant information loss due to data aggregation. Our proposed model aligns with the guidelines of Predict-and-Optimize (PnO) frameworks and can be efficiently solved using stochastic gradient descent methods. Results from 12 churn prediction datasets underscore the effectiveness of our approach, which achieves the best average performance compared to other well-established strategies in terms of average profit.
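A hedged sketch of what such a CLV-aware objective can look like as an SGD-trainable loss follows; the exact regret-minimization formulation is the paper's, and the profit model below (contact cost, acceptance rate) is a placeholder.

```python
# Hedged sketch of a profit-driven, CLV-weighted targeting loss trainable by
# SGD, in the spirit of predict-and-optimize; not the paper's exact objective.
import torch

def profit_loss(scores, churned, clv, contact_cost=10.0, accept_rate=0.3):
    """scores: model logits; churned: 1 if customer churned; clv: lifetime value.
    Soft targeting decision = sigmoid(score); expected profit of contacting a
    customer is accept_rate * churn * clv - contact_cost (toy profit model)."""
    target = torch.sigmoid(scores)
    expected_profit = target * (accept_rate * churned * clv - contact_cost)
    return -expected_profit.mean()   # maximize profit = minimize its negative

scores = torch.randn(8, requires_grad=True)
churned = torch.randint(0, 2, (8,)).float()
clv = torch.rand(8) * 500
profit_loss(scores, churned, clv).backward()   # gradients flow to the model
```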
An efficient saddle search method for ordered phase transitions involving translational invariance
Abstract
The bottleneck in studying phase transitions is the barrier-crossing process, composed of escaping from the basin of the local minimum and finding the saddle point. Breaking this bottleneck requires designing efficient algorithms tailored to the properties of the concrete phase transition. In this work, we propose an efficient nullspace-preserving saddle search (NPSS) method for a class of phase transitions involving translational invariance. The critical states in these phase transitions are usually degenerate. NPSS overcomes the difficulty of degeneracy by keeping the ascent direction orthogonal to the kernel space of the initial minimum, then efficiently escapes from the basin and finds the saddle point. We apply the NPSS method to phase transitions between crystals, and between crystal and quasicrystal, based on the Landau-Brazovskii and Lifshitz-Petrich free energy functionals. Numerical results show good performance of the proposed method. Finally, we investigate an important property of the inflection point, where symmetry breaking begins to occur and the nullspace is no longer maintained.
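The key ingredient, keeping the ascent direction orthogonal to the kernel (nullspace) of the initial minimum, amounts to a simple projection, sketched below with a placeholder nullspace basis (a translation mode of a periodic profile).

```python
# Minimal sketch of the nullspace-orthogonality ingredient of NPSS: project a
# search direction onto the orthogonal complement of the nullspace so the
# ascent never wanders along a degenerate (translation) direction.
import numpy as np

def project_out(v, nullspace_basis):
    """Remove components of v lying in span(nullspace_basis) (orthonormal rows)."""
    for k in nullspace_basis:
        v = v - np.dot(v, k) * k
    return v

n = 100
k = np.ones(n) / np.sqrt(n)   # placeholder translation mode
v = np.random.default_rng(0).normal(size=n)
v = project_out(v, [k])
print(abs(np.dot(v, k)))       # ~0: direction is orthogonal to the kernel
```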
The impact when neural min-sum variants meet ordered statistics decoding of LDPC codes
Abstract
The decoding performance of conventional belief propagation decoders is seriously confined by message dependence in the code structure for short or moderate-length LDPC codes. Despite similar external performance, we found that the decoding failures of different decoders, characterized by the cross-entropy metric, leave different room for improvement for the postprocessing of ordered statistics decoding. Bearing in mind that a postprocessor of higher order ensures better performance but incurs greater complexity, we propose a dynamic assignment of the searching scope with respect to each decoding pattern for ordered statistics decoding. Furthermore, the segmentation of decoding patterns, determined on the fly by the number of swaps performed in reducing the code check matrix to its systematic form via Gaussian elimination, also helps reduce complexity. Compared with existing methods, our adapted strategy saves most of the memory consumption and the inefficient searching of codeword candidates in extensive simulations, especially for longer codes, at the cost of marginal performance loss.
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
Abstract
Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks. Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but this process has been challenging due to its extraordinary resource requirements. To this end, existing efforts focus on parameter-efficient fine-tuning, which, unfortunately, fail to capitalize on the powerful potential of full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance. Our framework incorporates two novel ideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the momentum and has consistent update magnitudes for each parameter, an inherent advantage for robust quantization; and (ii) we quantize all model states and store them as integer values, and present a gradient flow and parameter update scheme for the quantized weights. As a result, QFT reduces the model state memory to 21% of the standard solution while achieving comparable performance, e.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a single A6000 GPU.
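A toy sketch of idea (i): because Lion's update is sign-based, the momentum only needs enough precision to preserve signs and coarse magnitudes, so it tolerates aggressive integer storage. The per-tensor int8 scaling below is a simplification of QFT's actual scheme.

```python
# Why Lion suits quantized training: its update is sign(b1*m + (1-b1)*g), so
# the momentum can be stored coarsely. Here it lives in int8 with a per-tensor
# scale; weight quantization and the real scaling scheme are simplified away.
import numpy as np

def lion_step_quantized(theta, m_q, m_scale, grad, lr=1e-4, b1=0.9, b2=0.99):
    m = m_q.astype(np.float32) * m_scale           # dequantize momentum
    update = np.sign(b1 * m + (1 - b1) * grad)     # Lion: sign of interpolation
    theta = theta - lr * update
    m = b2 * m + (1 - b2) * grad                   # momentum update
    m_scale = max(np.abs(m).max() / 127.0, 1e-12)  # requantize to int8
    m_q = np.clip(np.round(m / m_scale), -127, 127).astype(np.int8)
    return theta, m_q, m_scale

rng = np.random.default_rng(0)
theta = rng.normal(size=1000).astype(np.float32)
m_q, m_scale = np.zeros(1000, dtype=np.int8), 1.0
for _ in range(10):
    grad = rng.normal(size=1000).astype(np.float32)
    theta, m_q, m_scale = lion_step_quantized(theta, m_q, m_scale, grad)
```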
Operating-Envelopes-Aware Decentralized Welfare Maximization for Energy Communities
Authors: Ahmed S. Alahmed, Guido Cavraro, Andrey Bernstein, Lang Tong
Subjects: Systems and Control (eess.SY); Theoretical Economics (econ.TH); Optimization and Control (math.OC)
Abstract
We propose an operating-envelope-aware, prosumer-centric, and efficient energy community that aggregates individual and shared community distributed energy resources and transacts with a regulated distribution system operator (DSO) under a generalized net energy metering tariff design. To ensure safe network operation, the DSO imposes dynamic export and import limits, known as dynamic operating envelopes, on end-users' revenue meters. Given the operating envelopes, we propose an incentive-aligned community pricing mechanism under which the decentralized optimization of community members' benefit implies the optimization of overall community welfare. The proposed pricing mechanism satisfies the cost-causation principle and ensures the stability of the energy community in a coalition game setting. Numerical examples provide insights into the characteristics of the proposed pricing mechanism and quantitative measures of its performance.
$p\kappa$-Curves: Interpolatory curves with curvature approximating a parabola
Abstract
This paper introduces a novel class of fair and interpolatory curves called $p\kappa$-curves. These curves are composed of smoothly stitched B\'ezier curve segments, where the curvature distribution of each segment is made to closely resemble a parabola, resulting in an aesthetically pleasing shape. Moreover, each segment passes through an interpolated point at a parameter where the parabola has an extremum, encouraging the alignment of interpolated points with curvature extrema. To achieve these properties, we tailor an energy function that guides the optimization process to obtain the desired curve characteristics. Additionally, we develop an efficient algorithm and an initialization method, enabling interactive modeling of the $p\kappa$-curves without the need for global optimization. We provide various examples and comparisons with existing state-of-the-art methods to demonstrate the curve modeling capabilities and visually pleasing appearance of $p\kappa$-curves.
DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation
Authors: Rong Wang, Wei Mao, Hongdong Li
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
This paper addresses the task of 3D pose estimation for a hand interacting with an object from a single image observation. When modeling hand-object interaction, previous works mainly exploit proximity cues while overlooking the dynamical nature of the task: the hand must stably grasp the object to counteract gravity and thus prevent the object from slipping or falling. These works fail to leverage dynamical constraints in the estimation and consequently often produce unstable results. Meanwhile, refining unstable configurations with physics-based reasoning remains challenging, due both to the complexity of contact dynamics and to the lack of effective and efficient physics inference in the data-driven learning framework. To address both issues, we present DeepSimHO: a novel deep-learning pipeline that combines forward physics simulation and backward gradient approximation with a neural network. Specifically, for an initial hand-object pose estimated by a base network, we forward it to a physics simulator to evaluate its stability. However, due to non-smooth contact geometry and penetration, existing differentiable simulators cannot provide reliable state gradients. To remedy this, we further introduce a deep network to learn the stability evaluation process from the simulator while smoothly approximating its gradient, thus enabling effective back-propagation. Extensive experiments show that our method noticeably improves the stability of the estimation and achieves superior efficiency over test-time optimization. The code is available at https://github.com/rongakowang/DeepSimHO.
Multi-Task Learning-Enabled Automatic Vessel Draft Reading for Intelligent Maritime Surveillance
Abstract
Accurate and efficient vessel draft reading (VDR) is an important component of intelligent maritime surveillance, which can be exploited to assist in judging whether a vessel is normally loaded or overloaded. The computer vision technique, with an excellent price-to-performance ratio, has become a popular medium for estimating vessel draft depth. However, traditional estimation methods easily suffer from several limitations, such as sensitivity to low-quality images, high computational cost, etc. In this work, we propose a multi-task learning-enabled computational method (termed MTL-VDR) for generating highly reliable VDR. In particular, our MTL-VDR mainly consists of four components, i.e., draft mark detection, draft scale recognition, vessel/water segmentation, and final draft depth estimation. We first construct a benchmark dataset related to draft mark detection and employ a powerful and efficient convolutional neural network to accurately perform the detection task. The multi-task learning method is then proposed for simultaneous draft scale recognition and vessel/water segmentation. To obtain more robust VDR under complex conditions (e.g., damaged and stained scales), the accurate draft scales are generated by an automatic correction method based on the spatial distribution rules of draft scales. Finally, an adaptive computational method is exploited to yield an accurate and robust draft depth. Extensive experiments have been implemented on the realistic dataset to compare our MTL-VDR with state-of-the-art methods. The results demonstrate its superior performance in terms of accuracy, robustness, and efficiency. The computational speed exceeds 40 FPS, which satisfies the requirements of real-time maritime surveillance and guarantees vessel traffic safety.
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes
Abstract
Learning the distribution of data on Riemannian manifolds is crucial for modeling data from non-Euclidean spaces, as required by many applications in diverse scientific fields. Yet existing generative models on manifolds suffer from expensive divergence computation or rely on approximations of the heat kernel. These limitations restrict their applicability to simple geometries and hinder scalability to high dimensions. In this work, we introduce the Riemannian Diffusion Mixture, a principled framework that builds a generative process on a manifold as a mixture of endpoint-conditioned diffusion processes, instead of relying on the denoising approach of previous diffusion models; the generative process is characterized by a drift that guides it toward the most probable endpoint with respect to the geometry of the manifold. We further propose a simple yet efficient training objective for learning the mixture process that is readily applicable to general manifolds. Our method outperforms previous generative models on various manifolds while scaling to high dimensions, and requires a dramatically reduced number of in-training simulation steps for general manifolds.
Enhancing Neural Architecture Search with Multiple Hardware Constraints for Deep Learning Model Deployment on Tiny IoT Devices
Abstract
The rapid proliferation of computing domains relying on Internet of Things (IoT) devices has created a pressing need for efficient and accurate deep-learning (DL) models that can run on low-power devices. However, traditional DL models tend to be too complex and computationally intensive for typical IoT end-nodes. To address this challenge, Neural Architecture Search (NAS) has emerged as a popular design automation technique for co-optimizing the accuracy and complexity of deep neural networks. Nevertheless, existing NAS techniques require many iterations to produce a network that adheres to specific hardware constraints, such as the maximum memory available on the hardware or the maximum latency allowed by the target application. In this work, we propose a novel approach to incorporate multiple constraints into so-called Differentiable NAS optimization methods, which allows the generation, in a single shot, of a model that respects user-defined constraints on both memory and latency in a time comparable to a single standard training. The proposed approach is evaluated on five IoT-relevant benchmarks, including the MLPerf Tiny suite and Tiny ImageNet, demonstrating that, with a single search, it is possible to reduce memory and latency by 87.4% and 54.2%, respectively (as defined by our targets), while ensuring non-inferior accuracy on state-of-the-art hand-tuned deep neural networks for TinyML.
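A hedged sketch of how multiple hardware constraints can be folded into a differentiable-NAS objective: differentiable cost estimates are penalized only above the user-defined budgets, so one search can satisfy both. The cost estimators here are toy placeholders for the paper's models.

```python
# Folding memory/latency constraints into a differentiable-NAS loss via hinge
# penalties; mem_est/lat_est stand in for differentiable cost models.
import torch

def nas_loss(task_loss, mem_est, lat_est, mem_max, lat_max,
             lambda_mem=1.0, lambda_lat=1.0):
    """mem_est/lat_est: differentiable functions of the architecture params."""
    mem_penalty = torch.relu(mem_est - mem_max)   # zero once under budget
    lat_penalty = torch.relu(lat_est - lat_max)
    return task_loss + lambda_mem * mem_penalty + lambda_lat * lat_penalty

# Toy usage: architecture parameter alpha trades accuracy for size.
alpha = torch.tensor(0.8, requires_grad=True)
loss = nas_loss(task_loss=(1 - alpha) ** 2,       # pretend bigger nets fit better
                mem_est=alpha * 120.0, lat_est=alpha * 80.0,
                mem_max=100.0, lat_max=100.0)
loss.backward()
print(alpha.grad)
```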
Are GATs Out of Balance?
Authors: Nimrah Mustafa, Aleksandar Bojchevski, Rebekka Burkholz
Abstract
While the expressive power and computational capabilities of graph neural networks (GNNs) have been theoretically studied, their optimization and learning dynamics, in general, remain largely unexplored. Our study examines the Graph Attention Network (GAT), a popular GNN architecture in which a node's neighborhood aggregation is weighted by parameterized attention coefficients. We derive a conservation law of GAT gradient flow dynamics, which explains why a high portion of the parameters in GATs with standard initialization struggle to change during training. This effect is amplified in deeper GATs, which perform significantly worse than their shallow counterparts. To alleviate this problem, we devise an initialization scheme that balances the GAT network. Our approach i) allows more effective propagation of gradients and in turn enables trainability of deeper networks, and ii) attains a considerable speedup in training and convergence time in comparison to the standard initialization. Our main theorem serves as a stepping stone to studying the learning dynamics of positive homogeneous models with attention mechanisms.
AdaMesh: Personalized Facial Expressions and Head Poses for Speech-Driven 3D Facial Animation
Abstract
Speech-driven 3D facial animation aims at generating facial movements that are synchronized with the driving speech and has been widely explored recently. Existing works mostly neglect the person-specific talking style in generation, including facial expression and head pose styles. Several works attempt to capture these personalities by fine-tuning modules; however, limited training data leads to a lack of vividness. In this work, we propose AdaMesh, a novel adaptive speech-driven facial animation approach that learns the personalized talking style from a reference video of about 10 seconds and generates vivid facial expressions and head poses. Specifically, we propose mixture-of-low-rank adaptation (MoLoRA) to fine-tune the expression adapter, which efficiently captures the facial expression style. For the personalized pose style, we propose a pose adapter by building a discrete pose prior and retrieving the appropriate style embedding with a semantic-aware pose style matrix, without fine-tuning. Extensive experimental results show that our approach outperforms state-of-the-art methods, preserves the talking style in the reference video, and generates vivid facial animation. The supplementary video and code will be available at https://adamesh.github.io.
SAGE-ICP: Semantic Information-Assisted ICP
Authors: Jiaming Cui, Jiming Chen, Liang Li
Subjects: Robotics (cs.RO); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Robust and accurate pose estimation in unknown environments is an essential part of robotic applications. We focus on LiDAR-based point-to-point ICP combined with effective semantic information. This paper proposes a novel semantic information-assisted ICP method named SAGE-ICP, which leverages semantics in odometry. The semantic information for the whole scan is extracted in a timely and efficient manner by a 3D convolution network, and these point-wise labels are deeply involved in every part of the registration, including semantic voxel downsampling, data association, adaptive local map, and dynamic vehicle removal. Unlike previous semantic-aided approaches, the proposed method can improve localization accuracy in large-scale scenes even if the semantic information contains certain errors. Experimental evaluations on KITTI and KITTI-360 show that our method outperforms the baseline methods and improves accuracy while maintaining real-time performance, i.e., it runs faster than the sensor frame rate.
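Of the listed components, semantic voxel downsampling is easy to sketch: points are grouped by (voxel cell, semantic label) and reduced to centroids, so sparse classes are not averaged away by dominant ones. Voxel size and label handling below are illustrative.

```python
# Sketch of semantic voxel downsampling: one centroid per (voxel, label) pair.
import numpy as np

def semantic_voxel_downsample(points, labels, voxel=0.5):
    keys = np.floor(points / voxel).astype(np.int64)
    # One group per (voxel cell, label) pair.
    _, group = np.unique(np.c_[keys, labels], axis=0, return_inverse=True)
    out_pts = np.zeros((group.max() + 1, 3))
    out_lbl = np.zeros(group.max() + 1, dtype=labels.dtype)
    counts = np.bincount(group).astype(float)
    for dim in range(3):
        out_pts[:, dim] = np.bincount(group, weights=points[:, dim]) / counts
    out_lbl[group] = labels     # label is constant within a group
    return out_pts, out_lbl

pts = np.random.default_rng(0).uniform(0, 5, size=(1000, 3))
lbl = np.random.default_rng(1).integers(0, 3, size=1000)
dpts, dlbl = semantic_voxel_downsample(pts, lbl)
print(len(pts), "->", len(dpts))
```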
Optimizing the Placement of Roadside LiDARs for Autonomous Driving
Authors: Wentao Jiang, Hao Xiang, Xinyu Cai, Runsheng Xu, Jiaqi Ma, Yikang Li, Gim Hee Lee, Si Liu
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
Abstract
Multi-agent cooperative perception is an increasingly popular topic in the field of autonomous driving, where roadside LiDARs play an essential role. However, how to optimize the placement of roadside LiDARs is a crucial but often overlooked problem. This paper proposes an approach to optimize the placement of roadside LiDARs by selecting optimized positions within the scene for better perception performance. To efficiently obtain the best combination of locations, a greedy algorithm based on perceptual gain is proposed, which selects the location that can maximize the perceptual gain sequentially. We define perceptual gain as the increased perceptual capability when a new LiDAR is placed. To obtain the perception capability, we propose a perception predictor that learns to evaluate LiDAR placement using only a single point cloud frame. A dataset named Roadside-Opt is created using the CARLA simulator to facilitate research on the roadside LiDAR placement problem.
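The greedy loop itself is the textbook marginal-gain pattern, sketched below with a toy coverage score standing in for the learned perception predictor.

```python
# Generic greedy selection by marginal perceptual gain; `perception_score`
# stands in for the paper's learned perception predictor.
def greedy_placement(candidates, k, perception_score):
    """Pick k locations, each maximizing the gain over the current set."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: perception_score(chosen + [c]) -
                                 perception_score(chosen))
        chosen.append(best)
    return chosen

# Toy score with diminishing returns: coverage of unit cells along a road.
def toy_score(locs):
    covered = set()
    for x in locs:
        covered.update(range(x - 2, x + 3))   # each LiDAR covers 5 cells
    return len(covered)

print(greedy_placement(candidates=list(range(0, 30, 3)), k=3,
                       perception_score=toy_score))
```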
Deep ReLU networks and high-order finite element methods II: Chebyshev emulation
Abstract
Expression rates and stability in Sobolev norms of deep ReLU neural networks (NNs) in terms of the number of parameters defining the NN for continuous, piecewise polynomial functions, on arbitrary, finite partitions $\mathcal{T}$ of a bounded interval $(a,b)$ are addressed. Novel constructions of ReLU NN surrogates encoding the approximated functions in terms of Chebyshev polynomial expansion coefficients are developed. Chebyshev coefficients can be computed easily from the values of the function in the Clenshaw--Curtis points using the inverse fast Fourier transform. Bounds on expression rates and stability that are superior to those of constructions based on ReLU NN emulations of monomials considered in [Opschoor, Petersen, Schwab, 2020] are obtained. All emulation bounds are explicit in terms of the (arbitrary) partition of the interval, the target emulation accuracy and the polynomial degree in each element of the partition. ReLU NN emulation error estimates are provided for various classes of functions and norms, commonly encountered in numerical analysis. In particular, we show exponential ReLU emulation rate bounds for analytic functions with point singularities and develop an interface between Chebfun approximations and constructive ReLU NN emulations.
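The values-to-coefficients map referred to above is classical and compact: sampling f at the Clenshaw--Curtis points x_j = cos(pi*j/N) turns the Chebyshev expansion into a DCT-I, computable with one real FFT. A minimal NumPy version:

```python
# Chebyshev coefficients from values at Clenshaw-Curtis points via one real
# FFT of the even extension (a DCT-I); standard vals-to-coeffs recipe.
import numpy as np

def cheb_coeffs(vals):
    """vals[j] = f(cos(pi*j/N)), j = 0..N  ->  coefficients a_0..a_N."""
    N = len(vals) - 1
    ext = np.concatenate([vals, vals[-2:0:-1]])   # even extension, length 2N
    g = np.fft.rfft(ext).real
    a = g[: N + 1] / N
    a[0] /= 2.0
    a[N] /= 2.0
    return a

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)
a = cheb_coeffs(np.exp(x))        # expand e^x in Chebyshev polynomials T_k
print(a[:5])                      # coefficients decay rapidly for analytic f
```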
Distilling Efficient Vision Transformers from CNNs for Semantic Segmentation
Authors: Xu Zheng, Yunhao Luo, Pengyuan Zhou, Lin Wang
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
In this paper, we tackle a new problem: how can knowledge be transferred from a pre-trained, cumbersome yet well-performing CNN-based model to learn a compact Vision Transformer (ViT)-based model while maintaining its learning capacity? Due to the completely different characteristics of ViT and CNN and the long-existing capacity gap between teacher and student models in Knowledge Distillation (KD), directly transferring cross-model knowledge is non-trivial. To this end, we subtly leverage the visual and linguistic-compatible feature character of ViT (i.e., the student) and its capacity gap with the CNN (i.e., the teacher), and propose a novel CNN-to-ViT KD framework, dubbed C2VKD. Importantly, as the teacher's features are heterogeneous to those of the student, we first propose a novel visual-linguistic feature distillation (VLFD) module that explores efficient KD among the aligned visual and linguistic-compatible representations. Moreover, due to the large capacity gap between the teacher and student and the inevitable prediction errors of the teacher, we then propose a pixel-wise decoupled distillation (PDD) module to supervise the student under the combination of labels and the teacher's predictions from the decoupled target and non-target classes. Experiments on three semantic segmentation benchmark datasets consistently show that the mIoU increment of our method is over 200% of that of the SoTA KD methods.
An Analysis on Large Language Models in Healthcare: A Case Study of BioBERT
Authors: Shyni Sharaf, V. S. Anoop
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Abstract
This paper conducts a comprehensive investigation into applying large language models, particularly BioBERT, in healthcare. It begins with thoroughly examining previous natural language processing (NLP) approaches in healthcare, shedding light on the limitations and challenges these methods face. Following that, this research explores the path that led to the incorporation of BioBERT into healthcare applications, highlighting its suitability for addressing the specific requirements of tasks related to biomedical text mining. The analysis outlines a systematic methodology for fine-tuning BioBERT to meet the unique needs of the healthcare domain. This approach includes various components, including the gathering of data from a wide range of healthcare sources, data annotation for tasks like identifying medical entities and categorizing them, and the application of specialized preprocessing techniques tailored to handle the complexities found in biomedical texts. Additionally, the paper covers aspects related to model evaluation, with a focus on healthcare benchmarks and tasks like biomedical natural language processing, question answering, clinical document classification, and medical entity recognition. It explores techniques to improve the model's interpretability and validates its performance compared to existing healthcare-focused language models. The paper thoroughly examines ethical considerations, particularly patient privacy and data security. It highlights the benefits of incorporating BioBERT into healthcare contexts, including enhanced clinical decision support and more efficient information retrieval. Nevertheless, it acknowledges the impediments and complexities of this integration, encompassing concerns regarding data privacy, transparency, resource-intensive requirements, and the necessity for model customization to align with diverse healthcare domains.
Revisiting Android App Categorization
Authors: Marco Alecci, Jordan Samhi, Tegawendé F. Bissyandé, Jacques Klein
Abstract
Numerous tools rely on automatic categorization of Android apps as part of their methodology. However, incorrect categorization can lead to inaccurate outcomes, such as a malware detector wrongly flagging a benign app as malicious. One such example is the SlideIT Free Keyboard app, which has over 500000 downloads on Google Play. Despite being a "Keyboard" app, it is often wrongly categorized alongside "Language" apps due to the app's description focusing heavily on language support, resulting in incorrect analysis outcomes, including mislabeling it as a potential malware when it is actually a benign app. Hence, there is a need to improve the categorization of Android apps to benefit all the tools relying on it. In this paper, we present a comprehensive evaluation of existing Android app categorization approaches using our new ground-truth dataset. Our evaluation demonstrates the notable superiority of approaches that utilize app descriptions over those solely relying on data extracted from the APK file, while also leaving space for potential improvement in the former category. Thus, we propose two innovative approaches that effectively outperform the performance of existing methods in both description-based and APK-based methodologies. Finally, by employing our novel description-based approach, we have successfully demonstrated that adopting a higher-performing categorization method can significantly benefit tools reliant on app categorization, leading to an improvement in their overall performance. This highlights the significance of developing advanced and efficient app categorization methodologies for improved results in software engineering tasks.
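As a point of reference, a minimal description-based categorizer of the kind evaluated here can be a TF-IDF plus linear classifier pipeline; the toy data below echoes the Keyboard-vs-Language confusion discussed above and is not the authors' ground-truth dataset.

```python
# Minimal description-based app categorization baseline: TF-IDF features over
# app descriptions plus a linear classifier. Data/labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "swipe keyboard with fast typing and many languages",
    "learn spanish vocabulary with daily lessons",
    "type faster with gesture keyboard themes",
    "language course with grammar exercises",
]
categories = ["Keyboard", "Language", "Keyboard", "Language"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(descriptions, categories)
print(clf.predict(["gesture typing keyboard with language support"]))
```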
Score Regularized Policy Optimization through Diffusion Behavior
Authors: Huayu Chen, Cheng Lu, Zhengyi Wang, Hang Su, Jun Zhu
Abstract
Recent developments in offline reinforcement learning have uncovered the immense potential of diffusion modeling, which excels at representing heterogeneous behavior policies. However, sampling from diffusion policies is considerably slow because it necessitates tens to hundreds of iterative inference steps for one action. To address this issue, we propose to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization. Our method enjoys powerful generative capabilities of diffusion modeling while completely circumventing the computationally intensive and time-consuming diffusion sampling scheme, both during training and evaluation. Extensive results on D4RL tasks show that our method boosts action sampling speed by more than 25 times compared with various leading diffusion-based methods in locomotion tasks, while still maintaining state-of-the-art performance.
Molecule-Edit Templates for Efficient and Accurate Retrosynthesis Prediction
Authors: Mikołaj Sacha, Michał Sadowski, Piotr Kozakowski, Ruard van Workum, Stanisław Jastrzębski
Abstract
Retrosynthesis involves determining a sequence of reactions to synthesize complex molecules from simpler precursors. As this poses a challenge in organic chemistry, machine learning has offered solutions, particularly for predicting possible reaction substrates for a given target molecule. These solutions mainly fall into template-based and template-free categories. The former is efficient but relies on a vast set of predefined reaction patterns, while the latter, though more flexible, can be computationally intensive and less interpretable. To address these issues, we introduce METRO (Molecule-Edit Templates for RetrOsynthesis), a machine-learning model that predicts reactions using minimal templates - simplified reaction patterns capturing only essential molecular changes - reducing computational overhead and achieving state-of-the-art results on standard benchmarks.
A webcam-based machine learning approach for three-dimensional range of motion evaluation
Authors: Xiaoye Michael Wang, Derek T. Smith, Qin Zhu
Subjects: Human-Computer Interaction (cs.HC); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Background. Joint range of motion (ROM) is an important quantitative measure for physical therapy. Commonly relying on a goniometer, accurate and reliable ROM measurement requires extensive training and practice. This, in turn, imposes a significant barrier for those who have limited in-person access to healthcare. Objective. The current study presents and evaluates an alternative machine learning-based ROM evaluation method that could be remotely accessed via a webcam. Methods. To evaluate its reliability, the ROM measurements for a diverse set of joints (neck, spine, and upper and lower extremities) derived using this method were compared to those obtained from a marker-based optical motion capture system. Results. Data collected from 25 healthy adults demonstrated that the webcam solution exhibited high test-retest reliability, with substantial to almost perfect intraclass correlation coefficients for most joints. Compared with the marker-based system, the webcam-based system demonstrated substantial to almost perfect inter-rater reliability for some joints, and lower inter-rater reliability for other joints (e.g., shoulder flexion and elbow flexion), which could be attributed to the reduced sensitivity to joint locations at the apex of the movement. Conclusions. The proposed webcam-based method exhibited high test-retest and inter-rater reliability, making it a versatile alternative for existing ROM evaluation methods in clinical practice and the tele-implementation of physical therapy and rehabilitation.
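The geometric core of such a webcam pipeline is small: given three estimated 3D landmarks, the joint angle is the angle at the middle keypoint. The landmark values below are illustrative; a pose estimator would supply them.

```python
# Joint angle from three 3D keypoints -- the basic ROM computation once a
# pose estimator has produced landmark coordinates (values here are made up).
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at b formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.0, 1.1, 0.1])
wrist = np.array([0.2, 0.9, 0.3])
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```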
Multichannel consecutive data cross-extraction with 1DCNN-attention for diagnosis of power transformer
Abstract
Power transformers play a critical role in grid infrastructure, and their diagnosis is paramount for maintaining stable operation. However, current methods for transformer diagnosis focus on discrete dissolved gas analysis, neglecting deep feature extraction from multichannel consecutive data. The unutilized sequential data contain significant temporal information reflecting the transformer's condition. In light of this, a multichannel consecutive data cross-extraction (MCDC) structure is proposed in this article to comprehensively exploit the intrinsic characteristics and evaluate the state of the transformer. Moreover, to better accommodate the transformer diagnosis scenario, a one-dimensional convolutional neural network attention (1DCNN-attention) mechanism is introduced, offering a more efficient solution given its simplified spatial complexity. Finally, the effectiveness of MCDC and its superior generalization ability, compared with other algorithms, are validated in experiments conducted on a dataset collected from real operation cases of power transformers. Additionally, the better stability of 1DCNN-attention is also confirmed.
An Empirical Study of Instruction-tuning Large Language Models in Chinese
Abstract
The success of ChatGPT validates the potential of large language models (LLMs) in artificial general intelligence (AGI). Subsequently, the release of LLMs has sparked the open-source community's interest in instruction-tuning, which is deemed to accelerate ChatGPT's replication process. However, research on instruction-tuning LLMs in Chinese, the world's most spoken language, is still in its early stages. Therefore, this paper makes an in-depth empirical study of instruction-tuning LLMs in Chinese, which can serve as a cookbook providing valuable findings for effectively customizing LLMs to better respond to Chinese instructions. Specifically, we systematically explore the impact of LLM bases, parameter-efficient methods, and instruction data types, the three most important elements for instruction-tuning. Besides, we also conduct experiments to study the impact of other factors, e.g., chain-of-thought data and human-value alignment. We hope that this empirical study can make a modest contribution to the open Chinese version of ChatGPT. This paper releases a powerful Chinese LLM that is comparable to ChatGLM. The code and data are available at https://github.com/PhoebusSi/Alpaca-CoT.
Choosing optimal parameters for a distributed multi-constrained QoS routing
Abstract
We consider several basic questions on distributed routing in directed graphs with multiple additive costs, or metrics, and multiple constraints. Distributed routing in this sense is used in several protocols, such as IS-IS and OSPF. A practical approach to the multi-constraint routing problem is to, first, combine the metrics into a single 'composite' metric, and then apply one-to-all shortest path algorithms, e.g. Dijkstra, in order to find shortest path trees. We show that, in general, even if a feasible path exists and is known for every source and destination pair, it is impossible to guarantee a distributed routing under several constraints. We also study the question of choosing the optimal 'composite' metric. We show that under certain mathematical assumptions we can efficiently find a convex combination of several metrics that maximizes the number of discovered feasible paths. Sometimes it can be done analytically, and it is in general possible using what we call a 'smart iterative approach'. We illustrate these findings by extensive experiments on several typical network topologies.
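The composite-metric approach is straightforward to sketch: fold two additive metrics into one via a convex combination with weight lambda (the quantity whose optimal choice the paper studies), then run ordinary Dijkstra.

```python
# Composite-metric routing: edge cost = lam*m1 + (1-lam)*m2, then Dijkstra.
import heapq

def dijkstra_composite(graph, src, lam):
    """graph[u] = [(v, m1, m2), ...]; returns composite distances from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, m1, m2 in graph.get(u, []):
            nd = d + lam * m1 + (1 - lam) * m2
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 1, 10), ("c", 5, 1)], "b": [("d", 1, 10)], "c": [("d", 5, 1)]}
print(dijkstra_composite(g, "a", lam=0.9))   # favors metric 1
print(dijkstra_composite(g, "a", lam=0.1))   # favors metric 2
```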
Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model
Authors: Liyang Zhu, Meng Ding, Vaneet Aggarwal, Jinhui Xu, Di Wang
Abstract
In this paper, we revisit the problem of sparse linear regression in the local differential privacy (LDP) model. Existing research in the non-interactive and sequentially local models has focused on obtaining the lower bounds for the case where the underlying parameter is $1$-sparse, and extending such bounds to the more general $k$-sparse case has proven to be challenging. Moreover, it is unclear whether efficient non-interactive LDP (NLDP) algorithms exist. To address these issues, we first consider the problem in the $\epsilon$ non-interactive LDP model and provide a lower bound of $\Omega(\frac{\sqrt{dk\log d}}{\sqrt{n}\epsilon})$ on the $\ell_2$-norm estimation error for sub-Gaussian data, where $n$ is the sample size and $d$ is the dimension of the space. We propose an innovative NLDP algorithm, the very first of its kind for the problem. As a remarkable outcome, this algorithm also yields a novel and highly efficient estimator as a valuable by-product. Our algorithm achieves an upper bound of $\tilde{O}({\frac{d\sqrt{k}}{\sqrt{n}\epsilon}})$ for the estimation error when the data is sub-Gaussian, which can be further improved by a factor of $O(\sqrt{d})$ if the server has additional public but unlabeled data. For the sequentially interactive LDP model, we show a similar lower bound of $\Omega({\frac{\sqrt{dk}}{\sqrt{n}\epsilon}})$. As for the upper bound, we rectify a previous method and show that it is possible to achieve a bound of $\tilde{O}(\frac{k\sqrt{d}}{\sqrt{n}\epsilon})$. Our findings reveal fundamental differences between the non-private case, central DP model, and local DP model in the sparse linear regression problem.
LESS-Map: Lightweight and Evolving Semantic Map in Parking Lots for Long-term Self-Localization
Abstract
Precise and long-term stable localization is essential in parking lots for tasks such as autonomous driving and autonomous valet parking. Existing methods rely on a fixed, memory-inefficient map that lacks robust data association approaches, making them unsuitable for precise localization or long-term map maintenance. In this paper, we propose a novel mapping, localization, and map update system based on ground semantic features, utilizing low-cost cameras. We present a precise and lightweight parameterization method to establish improved data association and achieve accurate localization at the centimeter level. Furthermore, we propose a novel map update approach by implementing high-quality data association for parameterized semantic features, allowing continuous map update and refinement during re-localization while maintaining centimeter-level accuracy. We validate the performance of the proposed method in real-world experiments and compare it against state-of-the-art algorithms. The proposed method achieves an average accuracy improvement of 5 cm during the registration process. The generated maps consume only a compact 450 KB/km and remain adaptable to evolving environments through continuous updates.
Deep Kernel and Image Quality Estimators for Optimizing Robotic Ultrasound Controller using Bayesian Optimization
Authors: Deepak Raina, SH Chandrashekhara, Richard Voyles, Juan Wachs, Subir Kumar Saha
Abstract
Ultrasound is a commonly used medical imaging modality that requires expert sonographers to manually maneuver the ultrasound probe based on the acquired image. Autonomous Robotic Ultrasound (A-RUS) is an appealing alternative to this manual procedure that can reduce sonographers' workload. The key challenge in A-RUS is optimizing the ultrasound image quality for the region of interest across different patients. This requires knowledge of anatomy, recognition of error sources, and precise probe position, orientation, and pressure. Sample efficiency is important when optimizing the parameters associated with the robotized probe controller. Bayesian Optimization (BO), a sample-efficient optimization framework, has recently been applied to optimize the 2D motion of the probe. Nevertheless, further improvements are needed in sample efficiency for high-dimensional control of the probe. We aim to overcome this problem by using a neural network to learn a low-dimensional kernel in BO, termed the Deep Kernel (DK). The neural network of the DK is trained using probe and image data acquired during the procedure. Two image quality estimators are proposed that use a deep convolutional neural network to provide real-time feedback to the BO. We validated our framework using these two feedback functions on three urinary bladder phantoms and obtained a more than 50% increase in sample efficiency for 6D control of the robotized probe. Furthermore, our results indicate that this performance enhancement in BO is independent of the specific training dataset, demonstrating inter-patient adaptability.
IRS Assisted Federated Learning A Broadband Over-the-Air Aggregation Approach
Authors: Deyou Zhang, Ming Xiao, Zhibo Pang, Lihui Wang, H. Vincent Poor
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Abstract
We consider a broadband over-the-air computation empowered model aggregation approach for wireless federated learning (FL) systems and propose to leverage an intelligent reflecting surface (IRS) to combat wireless fading and noise. We first investigate the conventional node-selection based framework, where a few edge nodes are dropped in model aggregation to control the aggregation error. We analyze the performance of this node-selection based framework and derive an upper bound on its performance loss, which is shown to be related to the selected edge nodes. Then, we seek to minimize the mean-squared error (MSE) between the desired global gradient parameters and the actually received ones by optimizing the selected edge nodes, their transmit equalization coefficients, the IRS phase shifts, and the receive factors of the cloud server. By resorting to the matrix lifting technique and difference-of-convex programming, we successfully transform the formulated optimization problem into a convex one and solve it using off-the-shelf solvers. To improve learning performance, we further propose a weight-selection based FL framework. In such a framework, we assign each edge node a proper weight coefficient in model aggregation instead of discarding any of them to reduce the aggregation error, i.e., amplitude alignment of the received local gradient parameters from different edge nodes is not required. We also analyze the performance of this weight-selection based framework and derive an upper bound on its performance loss, followed by minimizing the MSE via optimizing the weight coefficients of the edge nodes, their transmit equalization coefficients, the IRS phase shifts, and the receive factors of the cloud server. Furthermore, we use the MNIST dataset for simulations to evaluate the performance of both node-selection and weight-selection based FL frameworks.
A Novel Voronoi-based Convolutional Neural Network Framework for Pushing Person Detection in Crowd Videos
Authors: Ahmed Alia, Mohammed Maree, Mohcine Chraibi, Armin Seyfried
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Analyzing the microscopic dynamics of pushing behavior within crowds can offer valuable insights into crowd patterns and interactions. By identifying instances of pushing in crowd videos, a deeper understanding of when, where, and why such behavior occurs can be achieved. This knowledge is crucial to creating more effective crowd management strategies, optimizing crowd flow, and enhancing overall crowd experiences. However, manually identifying pushing behavior at the microscopic level is challenging, and the existing automatic approaches cannot detect such microscopic behavior. Thus, this article introduces a novel automatic framework for identifying pushing in videos of crowds on a microscopic level. The framework comprises two main components: i) feature extraction and ii) video labeling. In the feature extraction component, a new Voronoi-based method is developed for determining the local regions associated with each person in the input video. Subsequently, these regions are fed into the EfficientNetV1B0 convolutional neural network to extract the deep features of each person over time. In the second component, a fully connected layer with a Sigmoid activation function is employed to analyze these deep features and annotate the individuals involved in pushing within the video. The framework is trained and evaluated on a new dataset created using six real-world experiments, including their corresponding ground truths. The experimental findings indicate that the suggested framework outperforms the seven baseline methods employed for comparative analysis.
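The Voronoi step has a natural SciPy counterpart: each person's cell delimits the local region that would be cropped and fed to the CNN. The sketch below shows only this partitioning; the cropping and clipping details are our own simplifications:

```python
import numpy as np
from scipy.spatial import Voronoi

# Pedestrian positions in one video frame (synthetic coordinates).
positions = np.random.default_rng(12).uniform(0, 10, size=(12, 2))
vor = Voronoi(positions)

# The Voronoi cell of each person delimits their local region; bounded cells
# could be cropped from the frame and passed to the CNN feature extractor.
for p, region_idx in enumerate(vor.point_region[:3]):
    region = vor.regions[region_idx]
    if -1 in region:   # cell extends to infinity; clip to the frame in practice
        print(f"person {p}: unbounded cell (needs clipping)")
    else:
        poly = vor.vertices[region]
        print(f"person {p}: cell polygon with {len(poly)} vertices")
```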
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Authors: Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao
Abstract
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning (VRL). Although methods like resetting and regularization can potentially mitigate plasticity loss, the influences of various components within the VRL framework on the agent's plasticity are still poorly understood. In this work, we conduct a systematic empirical exploration focusing on three primary underexplored facets and derive the following insightful conclusions: (1) data augmentation is essential in maintaining plasticity; (2) the critic's plasticity loss serves as the principal bottleneck impeding efficient training; and (3) without timely intervention to recover critic's plasticity in the early stages, its loss becomes catastrophic. These insights suggest a novel strategy to address the high replay ratio (RR) dilemma, where exacerbated plasticity loss hinders the potential improvements of sample efficiency brought by increased reuse frequency. Rather than setting a static RR for the entire training process, we propose Adaptive RR, which dynamically adjusts the RR based on the critic's plasticity level. Extensive evaluations indicate that Adaptive RR not only avoids catastrophic plasticity loss in the early stages but also benefits from more frequent reuse in later phases, resulting in superior sample efficiency.
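The Adaptive RR rule can be sketched as a simple controller: train with a low replay ratio while a plasticity proxy for the critic is degraded, and raise it once plasticity recovers. The proxy below (fraction of active ReLU units) and the thresholds are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def active_unit_fraction(activations: np.ndarray) -> float:
    """Illustrative plasticity proxy: share of units that ever fire on a batch."""
    return float((activations > 0).any(axis=0).mean())

def adaptive_replay_ratio(plasticity: float,
                          low_rr: float = 0.5, high_rr: float = 2.0,
                          threshold: float = 0.6) -> float:
    """Low reuse while the critic's plasticity is degraded, high reuse after recovery."""
    return high_rr if plasticity >= threshold else low_rr

# Toy usage: fake critic activations for one batch (64 samples, 256 units).
acts = np.maximum(np.random.default_rng(2).normal(size=(64, 256)), 0.0)
p = active_unit_fraction(acts)
print(f"plasticity proxy = {p:.2f}, replay ratio -> {adaptive_replay_ratio(p)}")
```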
Analytical Die-to-Die 3D Placement with Bistratal Wirelength Model and GPU Acceleration
Abstract
In this paper, we present a new analytical 3D placement framework with a bistratal wirelength model for F2F-bonded 3D ICs with heterogeneous technology nodes based on the electrostatic-based density model. The proposed framework, enabled GPU-acceleration, is capable of efficiently determining node partitioning and locations simultaneously, leveraging the dedicated 3D wirelength model and density model. The experimental results on ICCAD 2022 contest benchmarks demonstrate that our proposed 3D placement framework can achieve up to 6.1% wirelength improvement and 4.1% on average compared to the first-place winner with much fewer vertical interconnections and up to 9.8x runtime speedup. Notably, the proposed framework also outperforms the state-of-the-art 3D analytical placer by up to 3.3% wirelength improvement and 2.1% on average with up to 8.8x acceleration on large cases using GPUs.
Distance-based Weighted Transformer Network for Image Completion
Abstract
The challenge of image generation has been effectively modeled as a problem of structure priors or transformation. However, existing models have unsatisfactory performance in understanding the global input image structures because of particular inherent features (for example, local inductive prior). Recent studies have shown that self-attention is an efficient modeling technique for image completion problems. In this paper, we propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components. In our model, we leverage the strengths of both Convolutional Neural Networks (CNNs) and DWT blocks to enhance the image completion process. Specifically, CNNs are used to augment the local texture information of coarse priors and DWT blocks are used to recover certain coarse textures and coherent visual structures. Unlike current approaches that generally use CNNs to create feature maps, we use the DWT to encode global dependencies and compute distance-based weighted feature maps, which substantially minimizes the problem of visual ambiguities. Meanwhile, to better produce repeated textures, we introduce Residual Fast Fourier Convolution (Res-FFC) blocks to combine the encoder's skip features with the coarse features provided by our generator. Furthermore, a simple yet effective technique is proposed to normalize the non-zero values of convolutions, and fine-tune the network layers for regularization of the gradient norms to provide an efficient training stabiliser. Extensive quantitative and qualitative experiments on three challenging datasets demonstrate the superiority of our proposed model compared to existing approaches.
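One plausible reading of distance-based weighting is an attention map whose logits are penalized by the spatial distance between token positions, so nearby image regions contribute more. The sketch below is our own schematic of that idea, not the paper's DWT block:

```python
import numpy as np

def distance_weighted_attention(x, coords, tau=1.0):
    """x: (n, d) token features; coords: (n, 2) spatial positions of the tokens."""
    n, d = x.shape
    logits = (x @ x.T) / np.sqrt(d)                    # standard dot-product scores
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    logits = logits - dist / tau                       # farther tokens weigh less
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # row-wise softmax
    return w @ x                                       # distance-aware aggregation

h, w_ = 4, 4
rng = np.random.default_rng(3)
feats = rng.normal(size=(h * w_, 8))
grid = np.stack(np.meshgrid(np.arange(h), np.arange(w_)), -1).reshape(-1, 2).astype(float)
print(distance_weighted_attention(feats, grid).shape)  # (16, 8)
```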
Efficient machine-learning surrogates for large-scale geological carbon and energy storage
Authors: Teeratorn Kadeethum, Stephen J. Verzi, Hongkyu Yoon
Abstract
Geological carbon and energy storage are pivotal for achieving net-zero carbon emissions and addressing climate change. However, they face uncertainties due to geological factors and operational limitations, resulting in possibilities of induced seismic events or groundwater contamination. To overcome these challenges, we propose a specialized machine-learning (ML) model to manage extensive reservoir models efficiently. While ML approaches hold promise for geological carbon storage, the substantial computational resources required for large-scale analysis are the obstacle. We've developed a method to reduce the training cost for deep neural operator models, using domain decomposition and a topology embedder to link spatio-temporal points. This approach allows accurate predictions within the model's domain, even for untrained data, enhancing ML efficiency for large-scale geological storage applications.
Spike-time encoding of gas concentrations using neuromorphic analog sensory front-end
Authors: Shavika Rastogi, Nik Dennler, Michael Schmuker, André van Schaik
Subjects: Neural and Evolutionary Computing (cs.NE)
Abstract
Gas concentration detection is important for applications such as gas leakage monitoring. Metal Oxide (MOx) sensors show high sensitivities for specific gases, which makes them particularly useful for such monitoring applications. However, how to efficiently sample and further process the sensor responses remains an open question. Here we propose a simple analog circuit design inspired by the spiking output of the mammalian olfactory bulb and by event-based vision sensors. Our circuit encodes the gas concentration in the time difference between the pulses of two separate pathways. We show that in the setting of controlled airflow-embedded gas injections, the time difference between the two generated pulses varies inversely with gas concentration, which is in agreement with the spike timing difference between tufted cells and mitral cells of the mammalian olfactory bulb. Encoding concentration information in analog spike timings may pave the way for rapid and efficient gas detection, and ultimately lead to data- and power-efficient monitoring devices to be deployed in uncontrolled and turbulent environments.
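Since the inter-pulse delay varies inversely with concentration, decoding can be as simple as fitting c ≈ a/Δt + b to calibration data. A toy least-squares fit on synthetic numbers (not measured data):

```python
import numpy as np

# Synthetic calibration: delay (ms) shrinks as concentration (ppm) grows.
conc = np.array([50., 100., 200., 400., 800.])
dt = 1000.0 / conc + np.random.default_rng(4).normal(0, 0.1, size=5)

# Fit c = a / dt + b by linear least squares in the regressor 1/dt.
A = np.stack([1.0 / dt, np.ones_like(dt)], axis=1)
a, b = np.linalg.lstsq(A, conc, rcond=None)[0]
print(f"decoded concentration at dt = 5 ms: {a / 5.0 + b:.1f} ppm")
```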
Multimodal Graph Learning for Generative Tasks
Authors: Minji Yoon, Jing Yu Koh, Bryan Hooi, Ruslan Salakhutdinov
Abstract
Multimodal learning combines multiple data modalities, broadening the types and complexity of data our models can utilize: for example, from plain text to image-caption pairs. Most multimodal learning algorithms focus on modeling simple one-to-one pairs of data from two modalities, such as image-caption pairs, or audio-text pairs. However, in most real-world settings, entities of different modalities interact with each other in more complex and multifaceted ways, going beyond one-to-one mappings. We propose to represent these complex relationships as graphs, allowing us to capture data with any number of modalities, and with complex relationships between modalities that can flexibly vary from one sample to another. Toward this goal, we propose Multimodal Graph Learning (MMGL), a general and systematic framework for capturing information from multiple multimodal neighbors with relational structures among them. In particular, we focus on MMGL for generative tasks, building upon pretrained Language Models (LMs), aiming to augment their text generation with multimodal neighbor contexts. We study three research questions raised by MMGL: (1) how can we infuse multiple neighbor information into the pretrained LMs, while avoiding scalability issues? (2) how can we infuse the graph structure information among multimodal neighbors into the LMs? and (3) how can we finetune the pretrained LMs to learn from the neighbor context in a parameter-efficient manner? We conduct extensive experiments to answer these three questions on MMGL and analyze the empirical results to pave the way for future MMGL research.
Solving Semi-Discrete Optimal Transport Problems: star shapedeness and Newton's method
Authors: Luca Dieci, Daniyar Omarov
Subjects: Numerical Analysis (math.NA); Optimization and Control (math.OC)
Abstract
In this work, we propose a novel implementation of Newton's method for solving semi-discrete optimal transport (OT) problems for cost functions which are a positive combination of $p$-norms, $1<p<\infty$. It is well understood that the solution of a semi-discrete OT problem is equivalent to finding a partition of a bounded region in Laguerre cells, and we prove that the Laguerre cells are star-shaped with respect to the target points. By exploiting the geometry of the Laguerre cells, we obtain an efficient and reliable implementation of Newton's method to find the sought network structure. We provide implementation details and extensive results in support of our technique in 2-d problems, as well as comparison with other approaches used in the literature.
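The Laguerre-cell structure is easy to probe numerically: given dual weights $w_i$, each point of the region is assigned to the target minimizing $\|x - y_i\|_p^p - w_i$. A Monte Carlo sketch of that assignment (the Newton update on the weights, the paper's actual contribution, is omitted):

```python
import numpy as np

def laguerre_assign(x, targets, weights, p=2):
    """Assign points x to Laguerre cells: argmin_i ||x - y_i||_p^p - w_i."""
    cost = np.linalg.norm(x[:, None, :] - targets[None, :, :], ord=p, axis=-1) ** p
    return np.argmin(cost - weights[None, :], axis=1)

rng = np.random.default_rng(13)
grid = rng.uniform(0, 1, size=(10_000, 2))        # samples of the source region
y = rng.uniform(0, 1, size=(5, 2))                # discrete target points
w = np.zeros(5)                                   # dual weights (all equal here)
cells = laguerre_assign(grid, y, w)
print(np.bincount(cells) / len(grid))             # mass captured by each cell
```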
Model-based Clustering of Individuals' Ecological Momentary Assessment Time-series Data for Improving Forecasting Performance
Authors: Mandani Ntekouli, Gerasimos Spanakis, Lourens Waldorp, Anne Roefs
Abstract
Through Ecological Momentary Assessment (EMA) studies, time-series data are collected across multiple individuals, continuously monitoring various items of emotional behavior. Such complex data are commonly analyzed at the individual level, using personalized models. However, additional information from similar individuals is likely to enhance these models, leading to better descriptions of individuals. Thus, clustering is investigated with the aim of grouping together the most similar individuals and subsequently using this information in group-based models to improve individuals' predictive performance. More specifically, two model-based clustering approaches are examined: the first uses parameters extracted from personalized models, whereas the second is optimized on model-based forecasting performance. Both methods are then analyzed using intrinsic clustering evaluation measures (e.g., Silhouette coefficients) as well as the performance of a downstream forecasting scheme, where each forecasting group-model is devoted to describing all individuals belonging to one cluster. Among these, clustering based on performance shows the best results in terms of all examined evaluation measures. As another level of evaluation, the group-models' performance is compared to three baseline scenarios: the personalized, the all-in-one group, and the random group-based concept. This comparison again confirms the superiority of clustering-based methods, indicating that the utilization of group-based information can effectively enhance the overall performance across all individuals' data.
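The first approach, clustering individuals by parameters extracted from their personalized models, can be sketched with scikit-learn. The lag-1 autoregression used as the per-individual model below is our own simplification of whatever personalized model the study fits:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)

def personalized_params(series: np.ndarray) -> np.ndarray:
    """Fit a lag-1 autoregression per EMA item; return the flattened coefficients."""
    X, Y = series[:-1], series[1:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (items, items) transition matrix
    return coef.ravel()

# 20 individuals, each with 100 time points of 4 EMA items (synthetic).
data = [rng.normal(size=(100, 4)) for _ in range(20)]
features = np.stack([personalized_params(s) for s in data])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("silhouette:", round(silhouette_score(features, labels), 3))
```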
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Abstract
Given a real-world dataset, data condensation (DC) aims to synthesize a significantly smaller dataset that captures the knowledge of this dataset for model training with high performance. Recent works propose to enhance DC with data parameterization, which condenses data into parameterized data containers rather than pixel space. The intuition behind data parameterization is to encode shared features of images to avoid additional storage costs. In this paper, we recognize that images share common features in a hierarchical way due to the inherent hierarchical structure of the classification system, which is overlooked by current data parameterization methods. To better align DC with this hierarchical nature and encourage more efficient information sharing inside data containers, we propose a novel data parameterization architecture, Hierarchical Memory Network (HMN). HMN stores condensed data in a three-tier structure, representing the dataset-level, class-level, and instance-level features. Another helpful property of the hierarchical architecture is that HMN naturally ensures good independence among images despite achieving information sharing. This enables instance-level pruning for HMN to remove redundant information, further enhancing performance. We evaluate HMN on four public datasets (SVHN, CIFAR10, CIFAR100, and Tiny-ImageNet) and compare HMN with eight DC baselines. The evaluation results show that our proposed method outperforms all baselines, even when trained with a batch-based loss consuming less GPU memory.
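The three-tier container can be pictured as a dataset-level code shared by all images, a class-level code shared within a class, and a per-instance residual. The abstract does not specify how HMN decodes these tiers into training images, so the additive composition below is purely illustrative:

```python
import numpy as np

class HierarchicalMemory:
    """Toy three-tier container: dataset-, class-, and instance-level codes."""
    def __init__(self, n_classes, n_per_class, dim, rng):
        self.dataset_code = rng.normal(size=dim)            # shared by all images
        self.class_codes = rng.normal(size=(n_classes, dim))
        self.instance_codes = [rng.normal(size=(n_per_class, dim))
                               for _ in range(n_classes)]

    def materialize(self, cls, idx):
        # Shared features are stored once; only the instance residual is unique.
        return (self.dataset_code
                + self.class_codes[cls]
                + self.instance_codes[cls][idx])

    def prune(self, cls, keep):
        # Instance-level pruning: keep only the informative instances of a class.
        self.instance_codes[cls] = self.instance_codes[cls][keep]

hmn = HierarchicalMemory(n_classes=10, n_per_class=5, dim=64,
                         rng=np.random.default_rng(6))
print(hmn.materialize(cls=3, idx=2).shape)   # (64,)
hmn.prune(cls=3, keep=[0, 2, 4])             # drop two redundant instances
print(hmn.instance_codes[3].shape)           # (3, 64)
```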
Human-Centered Evaluation of XAI Methods
Authors: Karam Dawoud, Wojciech Samek, Sebastian Lapuschkin, Sebastian Bosse
Abstract
In the ever-evolving field of Artificial Intelligence, a critical challenge has been to decipher the decision-making processes within the so-called "black boxes" in deep learning. Over recent years, a plethora of methods have emerged, dedicated to explaining decisions across diverse tasks. Particularly in tasks like image classification, these methods typically identify and emphasize the pivotal pixels that most influence a classifier's prediction. Interestingly, this approach mirrors human behavior: when asked to explain our rationale for classifying an image, we often point to the most salient features or aspects. Capitalizing on this parallel, our research embarked on a user-centric study. We sought to objectively measure the interpretability of three leading explanation methods: (1) Prototypical Part Network, (2) Occlusion, and (3) Layer-wise Relevance Propagation. Intriguingly, our results highlight that while the regions spotlighted by these methods can vary widely, they all offer humans a nearly equivalent depth of understanding. This enables users to discern and categorize images efficiently, reinforcing the value of these methods in enhancing AI transparency.
Building hierarchies of semiclassical Jacobi polynomials for spectral methods in annuli
Authors: Ioannis P. A. Papadopoulos, Timon S. Gutleb, Richard M. Slevinsky, Sheehan Olver
Abstract
We discuss computing with hierarchies of families of (potentially weighted) semiclassical Jacobi polynomials which arise in the construction of multivariate orthogonal polynomials. In particular, we outline how to build connection and differentiation matrices with optimal complexity and compute analysis and synthesis operations in quasi-optimal complexity. We investigate a particular application of these results to constructing orthogonal polynomials in annuli, called the generalised Zernike annular polynomials, which lead to sparse discretisations of partial differential equations. We compare against a scaled-and-shifted Chebyshev--Fourier series showing that in general the annular polynomials converge faster when approximating smooth functions and have better conditioning. We also construct a sparse spectral element method by combining disk and annulus cells, which is highly effective for solving PDEs with radially discontinuous variable coefficients and data.
Third order tensor-oriented directional splitting for exponential integrators
Abstract
Suitable discretizations through tensor product formulas of popular multidimensional operators (diffusion--advection, for instance) lead to matrices with $d$-dimensional Kronecker sum structure. For evolutionary PDEs containing such operators and integrated in time with exponential integrators, it is of paramount importance to efficiently approximate the actions of $\varphi$-functions of such matrices. In this work, we show how to produce directional split approximations of third order with respect to the time step size. They conveniently employ tensor-matrix products (realized with high-performance level 3 BLAS) and allow for the effective use in practice of exponential integrators up to order three. The approach has been successfully tested against state-of-the-art techniques on two well-known physical models, namely FitzHugh--Nagumo and Schnakenberg.
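The computational primitive underneath is that the action of a Kronecker-sum matrix never needs the matrix to be formed: for $d=2$, $(A \oplus B)\,\mathrm{vec}(U) = \mathrm{vec}(BU + UA^T)$ in column-major vec, i.e. a pair of level-3 BLAS products. A quick NumPy check of this identity (our illustration of the primitive the paper builds on, not its third-order splitting itself):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 4
A, B = rng.normal(size=(m, m)), rng.normal(size=(n, n))
U = rng.normal(size=(n, m))                       # n x m; column-major vec below

# Explicit Kronecker sum acting on vec(U) (what we want to avoid forming).
M = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)
lhs = M @ U.flatten(order="F")

# Tensor-matrix form: two dense GEMMs instead of one (mn x mn) matrix.
rhs = (B @ U + U @ A.T).flatten(order="F")

assert np.allclose(lhs, rhs)
print("Kronecker-sum action via two GEMMs verified")
```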
In-Context Unlearning: Language Models as Few Shot Unlearners
Authors: Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
Abstract
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the \emph{Right to be Forgotten}. Although unlearning is particularly relevant for LLMs in light of the copyright issues they raise, achieving precise unlearning is computationally infeasible for very large models. To this end, recent work has proposed several algorithms which approximate the removal of training data without retraining the model. These algorithms crucially rely on access to the model parameters in order to update them, an assumption that may not hold in practice due to computational constraints or when the LLM is accessed via API. In this work, we propose a new class of unlearning methods for LLMs we call ``In-Context Unlearning'', providing inputs in context and without having to update model parameters. To unlearn a particular training instance, we provide the instance alongside a flipped label and additional correctly labelled instances which are prepended as inputs to the LLM at inference time. Our experimental results demonstrate that these contexts effectively remove specific information from the training set while maintaining performance levels that are competitive with (or in some cases exceed) state-of-the-art unlearning methods that require access to the LLM parameters.
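The construction is easy to make concrete: the point to be unlearned is shown with a flipped label, padded with correctly labelled examples, and prepended to the query. A minimal sketch, where the prompt template and label set are our own and the paper's exact format may differ:

```python
def icu_prompt(forget_x, forget_y, support, query, labels=("negative", "positive")):
    """Build an In-Context Unlearning style prompt.
    forget_x / forget_y: training instance to unlearn and its true label.
    support: list of (text, label) pairs with *correct* labels.
    """
    flipped = labels[1 - labels.index(forget_y)]      # flip the label to unlearn
    lines = [f"Review: {forget_x}\nSentiment: {flipped}"]
    lines += [f"Review: {x}\nSentiment: {y}" for x, y in support]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(icu_prompt("The plot was gripping.", "positive",
                 [("Dull and slow.", "negative"), ("A joyful ride.", "positive")],
                 "Worth watching?"))
```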
Qlarify: Bridging Scholarly Abstracts and Papers with Recursively Expandable Summaries
Authors: Raymond Fok, Joseph Chee Chang, Tal August, Amy X. Zhang, Daniel S. Weld
Abstract
As scientific literature has grown exponentially, researchers often rely on paper triaging strategies such as browsing abstracts before deciding to delve into a paper's full text. However, when an abstract is insufficient, researchers are required to navigate an informational chasm between 150-word abstracts and 10,000-word papers. To bridge that gap, we introduce the idea of recursively expandable summaries and present Qlarify, an interactive system that allows users to recursively expand an abstract by progressively incorporating additional information from a paper's full text. Starting from an abstract, users can brush over summary text to specify targeted information needs or select AI-suggested entities in the text. Responses are then generated on-demand by an LLM and appear in the form of a fluid, threaded expansion of the existing text. Each generated summary can be efficiently verified through attribution to a relevant source-passage in the paper. Through an interview study (n=9) and a field deployment (n=275) at a research conference, we use Qlarify as a technology probe to elaborate upon the expandable summaries design space, highlight how scholars benefit from Qlarify's expandable abstracts, and identify future opportunities to support low-effort and just-in-time exploration of scientific documents (and other information spaces) through LLM-powered interactions.
Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models
Authors: Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker
Abstract
Considerable effort has been dedicated to mitigating toxicity, but existing methods often require drastic modifications to model parameters or the use of computationally intensive auxiliary models. Furthermore, previous approaches have often neglected the crucial factor of language's evolving nature over time. In this work, we present a comprehensive perspective on toxicity mitigation that takes into account its changing nature. We introduce Goodtriever, a flexible methodology that matches the current state-of-the-art toxicity mitigation while achieving 43% relative latency reduction during inference and being more computationally efficient. By incorporating a retrieval-based approach at decoding time, Goodtriever enables toxicity-controlled text generation. Our research advocates for an increased focus on adaptable mitigation techniques, which better reflect the data drift models face when deployed in the wild. Code and data are available at https://github.com/for-ai/goodtriever.
Transformers for Green Semantic Communication: Less Energy, More Semantics
Authors: Shubhabrata Mukherjee, Cory Beard, Sejun Song (School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA)
Subjects: Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
Abstract
Semantic communication aims to transmit meaningful and effective information rather than focusing on individual symbols or bits, yielding lower latency, reduced bandwidth usage, and higher throughput compared to traditional communication. However, semantic communication poses significant challenges due to the need for universal metrics to benchmark the joint effects of semantic information loss and practical energy consumption. This research presents a novel multi-objective loss function named "Energy-Optimized Semantic Loss" (EOSL), addressing the challenge of balancing semantic information loss and energy consumption. Through comprehensive experiments on transformer models, including CPU and GPU energy usage, we demonstrate that EOSL-based encoder model selection can save up to 90\% of energy while achieving a 44\% improvement in semantic similarity performance during inference. This work paves the way for energy-efficient neural network selection and the development of greener semantic communication architectures.
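The abstract does not give EOSL's functional form, so the sketch below only illustrates the idea of a multi-objective score that trades semantic loss against measured energy; the combination rule, normalization, and candidate numbers are all assumptions:

```python
def eosl(semantic_similarity: float, energy_joules: float,
         energy_budget: float = 100.0, lam: float = 0.5) -> float:
    """Hypothetical Energy-Optimized Semantic Loss: lower is better.
    Combines semantic information loss with normalized energy consumption."""
    semantic_loss = 1.0 - semantic_similarity          # in [0, 1]
    energy_loss = energy_joules / energy_budget        # normalized
    return (1 - lam) * semantic_loss + lam * energy_loss

# Encoder selection: pick the model with the lowest combined score.
candidates = {"small-transformer": (0.88, 12.0), "large-transformer": (0.93, 95.0)}
best = min(candidates, key=lambda k: eosl(*candidates[k]))
print("selected encoder:", best)
```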
Abstract
In many interactive decision-making settings, there is latent and unobserved information that remains fixed. Consider, for example, a dialogue system, where complete information about a user, such as the user's preferences, is not given. In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction. This type of environment can be modeled as a Latent Markov Decision Process (LMDP), a special instance of Partially Observed Markov Decision Processes (POMDPs). Previous work established exponential lower bounds in the number of latent contexts for the LMDP class. This raises the question: under which natural assumptions can a near-optimal policy of an LMDP be efficiently learned? In this work, we study the class of LMDPs with {\em prospective side information}, in which an agent receives additional, weakly revealing, information on the latent context at the beginning of each episode. We show that, surprisingly, this problem is not captured by contemporary settings and algorithms designed for partially observed environments. We then establish that any sample-efficient algorithm must suffer at least $\Omega(K^{2/3})$-regret, as opposed to standard $\Omega(\sqrt{K})$ lower bounds, and design an algorithm with a matching upper bound.
An Explicit Local Space-Time Adaptive Framework for Monodomain Models
Authors: Dennis Ogiermann, Daniel Balzani, Luigi E. Perotti
Abstract
We present a new explicit local space-time adaptive framework to decrease the time required for monodomain simulations for cardiac electrophysiology. Based on the localized structure of the steep activation wavefront in solutions to monodomain problems, the proposed framework adopts small time steps and a tree-based adaptive mesh refinement scheme only in the regions necessary to resolve these localized structures. The time step and mesh adaptation selection process is fully controlled by a combination of local error indicators. The main contributions of this work consist in the introduction of a primal symmetric interior penalty formulation of the monodomain model and an efficient algorithmic strategy to manage local time stepping for its temporal discretization. In a first serial implementation of this framework, we report decreases in wall-clock time between 2 and 20 times with respect to an optimized implementation of a commonly used numerical scheme, showing that this framework is a promising candidate to accelerate monodomain simulations of cardiac electrophysiology.
AG-CVG: Coverage Planning with a Mobile Recharging UGV and an Energy-Constrained UAV
Abstract
In this paper, we present an approach for coverage path planning for a team of an energy-constrained Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV). Both the UAV and the UGV have predefined areas that they have to cover. The goal is to perform complete coverage by both robots while minimizing the coverage time. The UGV can also serve as a mobile recharging station. The UAV and UGV need to occasionally rendezvous for recharging. We propose a heuristic method to address this NP-Hard planning problem. Our approach involves initially determining coverage paths without factoring in energy constraints. Subsequently, we cluster segments of these paths and employ graph matching to assign UAV clusters to UGV clusters for efficient recharging management. We perform numerical analysis on real-world coverage applications and show that compared with a greedy approach our method reduces rendezvous overhead on average by 11.33\%. We demonstrate proof-of-concept with a team of a VOXL m500 drone and a Clearpath Jackal ground vehicle, providing a complete system from the offline algorithm to the field execution.
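The cluster-to-cluster assignment step maps naturally onto a linear assignment problem. A minimal sketch with SciPy, where centroid distance as the matching cost is our simplification of the paper's graph-matching criterion:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(8)
uav_clusters = rng.uniform(0, 100, size=(4, 2))   # centroids of UAV path segments
ugv_clusters = rng.uniform(0, 100, size=(4, 2))   # centroids of UGV path segments

# Cost of pairing a UAV cluster with a UGV cluster: centroid distance, a proxy
# for the detour needed to rendezvous and recharge.
cost = np.linalg.norm(uav_clusters[:, None] - ugv_clusters[None, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"UAV cluster {r} recharges near UGV cluster {c} "
          f"(detour cost {cost[r, c]:.1f} m)")
```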
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
Authors: Hannah Rose Kirk, Andrew M. Bean, Bertie Vidgen, Paul Röttger, Scott A. Hale
Subjects: Computation and Language (cs.CL); Computers and Society (cs.CY)
Abstract
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and incorporate feedback in a way that is efficient, effective and unbiased, especially for highly subjective human preferences and values. In this paper, we survey existing approaches for learning from human feedback, drawing on 95 papers primarily from the ACL and arXiv repositories. First, we summarise the past, pre-LLM trends for integrating human feedback into language models. Second, we give an overview of present techniques and practices, as well as the motivations for using feedback; conceptual frameworks for defining values and preferences; and how feedback is collected and from whom. Finally, we encourage a better future of feedback learning in LLMs by raising five unresolved conceptual and practical challenges.
Differentiable Euler Characteristic Transforms for Shape Classification
Abstract
The Euler Characteristic Transform (ECT) has proven to be a powerful representation, combining geometrical and topological characteristics of shapes and graphs. However, the ECT was hitherto unable to learn task-specific representations. We overcome this issue and develop a novel computational layer that enables learning the ECT in an end-to-end fashion. Our method DECT is fast and computationally efficient, while exhibiting performance on a par with more complex models in both graph and point cloud classification tasks. Moreover, we show that this seemingly unexpressive statistic still provides the same topological expressivity as more complex topological deep learning layers.
Prompt Backdoors in Visual Prompt Learning
Authors: Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Subjects: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Fine-tuning large pre-trained computer vision models is infeasible for resource-limited users. Visual prompt learning (VPL) has thus emerged to provide an efficient and flexible alternative to model fine-tuning through Visual Prompt as a Service (VPPTaaS). Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction. However, this new learning paradigm may also pose security risks when the VPPTaaS provider instead provides a malicious visual prompt. In this paper, we take the first step to explore such risks through the lens of backdoor attacks. Specifically, we propose BadVisualPrompt, a simple yet effective backdoor attack against VPL. For example, poisoning $5\%$ of the CIFAR10 training data leads to above $99\%$ attack success rates with only a negligible model accuracy drop of $1.5\%$. In particular, we identify and then address a new technical challenge related to interactions between the backdoor trigger and visual prompt, which does not exist in conventional, model-level backdoors. Moreover, we provide in-depth analyses of seven backdoor defenses from model, prompt, and input levels. Overall, all these defenses are either ineffective or impractical to mitigate our BadVisualPrompt, implying the critical vulnerability of VPL.
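The data-poisoning step behind such an attack is standard and easy to sketch: stamp a small trigger patch on a fraction of the training images and relabel them to the target class. The patch shape, location, and rate below mirror typical settings, not necessarily the paper's:

```python
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, patch=3, rng=None):
    """Stamp a white trigger patch on `rate` of the images and flip their labels."""
    rng = rng or np.random.default_rng(9)
    imgs, labs = images.copy(), labels.copy()
    idx = rng.choice(len(imgs), size=int(rate * len(imgs)), replace=False)
    imgs[idx, -patch:, -patch:, :] = 1.0     # bottom-right white square trigger
    labs[idx] = target_class
    return imgs, labs

x = np.random.default_rng(10).uniform(size=(1000, 32, 32, 3)).astype(np.float32)
y = np.random.default_rng(11).integers(0, 10, size=1000)
xp, yp = poison(x, y)
print(f"labels changed on {(yp != y).sum()} of {len(y)} samples")
```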
Polytopal discontinuous Galerkin discretization of brain multiphysics flow dynamics
Authors: Ivan Fumagalli, Mattia Corti, Nicola Parolini, Paola F. Antonietti
Abstract
A comprehensive mathematical model of the multiphysics flow of blood and Cerebrospinal Fluid (CSF) in the brain can be expressed as the coupling of a poromechanics system and Stokes' equations: the first describes fluids filtration through the cerebral tissue and the tissue's elastic response, while the latter models the flow of the CSF in the brain ventricles. This model describes the functioning of the brain's waste clearance mechanism, which has been recently discovered to play an essential role in the progress of neurodegenerative diseases. To model the interactions between different scales in the porous medium, we propose a physically consistent coupling between Multi-compartment Poroelasticity (MPE) equations and Stokes' equations. In this work, we introduce a numerical scheme for the discretization of such coupled MPE-Stokes system, employing a high-order discontinuous Galerkin method on polytopal grids to efficiently account for the geometric complexity of the domain. We analyze the stability and convergence of the space semidiscretized formulation, we prove a-priori error estimates, and we present a temporal discretization based on a combination of Newmark's $\beta$-method for the elastic wave equation and the $\theta$-method for the other equations of the model. Numerical simulations carried out on test cases with manufactured solutions validate the theoretical error estimates. We also present numerical results on a two-dimensional slice of a patient-specific brain geometry reconstructed from diagnostic images, to test in practice the advantages of the proposed approach.
Hybrid System Stability Analysis of Multi-Lane Mixed-Autonomy Traffic
Abstract
Autonomous vehicles (AVs) hold vast potential to enhance transportation systems by reducing congestion, improving safety, and lowering emissions. AV controls lead to emergent traffic phenomena; one intriguing phenomenon is traffic breaks (rolling roadblocks), where a single AV efficiently stabilizes multiple lanes through frequent lane switching, similar to highway patrol officers weaving across multiple lanes during difficult traffic conditions. While previous theoretical studies focus on single-lane mixed-autonomy systems, this work proposes a stability analysis framework for multi-lane systems under AV controls. Casting this problem into the hybrid system paradigm, the proposed analysis integrates continuous vehicle dynamics and discrete jumps from AV lane switches. By examining the influence of the lane-switch frequency on the system's stability, the analysis offers a principled explanation of the traffic break phenomenon and further uncovers opportunities for less intrusive traffic smoothing through less frequent lane switching. The analysis also facilitates the design of traffic-aware AV lane-switch strategies to enhance system stability. Numerical analysis reveals a strong alignment between theory and simulation, validating the effectiveness of the proposed stability framework in analyzing multi-lane mixed-autonomy traffic systems.
DiPmark: A Stealthy, Efficient and Resilient Watermark for Large Language Models
Abstract
Watermarking techniques offer a promising way to secure data via embedding covert information into the data. A paramount challenge in the domain lies in preserving the distribution of original data during watermarking. Our research extends and refines existing watermarking frameworks, placing emphasis on the importance of a distribution-preserving (DiP) watermark. Contrary to the current strategies, our proposed DiPmark preserves the original token distribution during watermarking (stealthy), is detectable without access to the language model API or weights (efficient), and is robust to moderate changes of tokens (resilient). This is achieved by incorporating a novel reweight strategy, combined with a hash function that assigns unique \textit{i.i.d.} ciphers based on the context. The empirical benchmarks of our approach underscore its stealthiness, efficiency, and resilience, making it a robust solution for watermarking tasks that demand impeccable quality preservation.
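The context-keyed cipher can be made concrete: hash the preceding tokens together with a secret key to seed a pseudorandom assignment over the vocabulary. The sketch below shows only this keying step, with a simple green-list style bias as a stand-in; DiPmark's actual reweight is designed to preserve the token distribution exactly, which this toy version does not:

```python
import hashlib
import numpy as np

SECRET_KEY = b"watermark-key"   # hypothetical shared secret
VOCAB = 50_000

def context_cipher(context_ids, window=3):
    """Derive an i.i.d.-looking binary cipher over the vocabulary from the context."""
    payload = SECRET_KEY + bytes(str(context_ids[-window:]), "utf8")
    seed = int.from_bytes(hashlib.sha256(payload).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=VOCAB)          # one bit per vocabulary token

def reweight(logits, cipher, delta=1.0):
    """Toy (non-distribution-preserving) stand-in: bias the cipher-1 tokens."""
    return logits + delta * cipher

ctx = [101, 2023, 3899]
biased = reweight(np.zeros(VOCAB), context_cipher(ctx))
print("biased tokens:", int(biased.sum()))
```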
Keyword: faster
A quantum annealing-sequential quadratic programming assisted finite element simulation for non-linear and history-dependent mechanical problems
Abstract
We propose a framework to solve non-linear and history-dependent mechanical problems based on a hybrid classical computer-quantum annealer approach. Quantum computers are anticipated to perform particular operations exponentially faster, although the available operations are not as versatile as those of a classical computer. Quantum annealers (QAs), however, are well suited to evaluating the minimum state of a Hamiltonian quadratic potential. Therefore, we reformulate the elasto-plastic finite element problem as a double minimisation process framed at the structural scale using the variational updates formulation. In order to comply with the expected quadratic nature of the Hamiltonian, the resulting non-linear minimisation problems are iteratively solved with the suggested Quantum Annealing-assisted Sequential Quadratic Programming (QA-SQP): a sequence of quadratic minimisation problems is performed by approximating the objective function by a quadratic Taylor series. Each quadratic minimisation problem of continuous variables is then transformed into a binary quadratic problem, which can be solved on quantum annealing hardware such as the D-Wave system. The applicability of the proposed framework is demonstrated with one- and two-dimensional elasto-plastic numerical benchmarks. The current work provides a pathway to performing general non-linear finite element simulations assisted by quantum computing.
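The reformulation at the heart of QA-SQP can be sketched: take a local quadratic model $\frac{1}{2}\Delta x^T H \Delta x + g^T \Delta x$, expand each continuous update in a fixed binary basis $\Delta x = Cq - o$ with $q \in \{0,1\}^k$, and read off the QUBO matrix. The encoding and scaling choices below are illustrative, and brute force stands in for the annealer:

```python
import itertools
import numpy as np

H = np.array([[2.0, 0.4], [0.4, 1.0]])     # local Hessian (SPD)
g = np.array([-1.0, 0.5])                  # local gradient

# Binary encoding: each Delta-x component uses 3 bits with fixed weights.
bits, step = 3, 0.25
w = step * 2.0 ** np.arange(bits)          # per-variable bit weights
C = np.zeros((2, 2 * bits))
C[0, :bits], C[1, bits:] = w, w
offset = C.sum(axis=1) / 2.0               # center the representable range at 0

# f(Cq - offset) = const + q^T Q q, with linear terms folded into the diagonal
# (valid because q_i^2 = q_i for binary variables).
lin = C.T @ (g - H @ offset)
Q = 0.5 * C.T @ H @ C
Q[np.diag_indices_from(Q)] += lin

# Brute-force the QUBO (a quantum annealer would sample this instead).
best = min(itertools.product([0, 1], repeat=2 * bits),
           key=lambda q: np.array(q) @ Q @ np.array(q))
dx = C @ np.array(best, float) - offset
print("Newton-like step from QUBO:", np.round(dx, 3))
```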
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
Abstract
We present Ultima, a new collective-communication system for the cloud with bounded, predictable completion times for deep-learning jobs in the presence of varying computation (stragglers) and communication (congestion and gradient drops) variabilities. Ultima exploits the inherent resiliency and the stochastic nature of distributed deep-learning (DDL) training to work with approximated gradients, and provides an efficient balance between (tail) performance and the resulting accuracy of the trained models. Exploiting this domain-specific characteristic of DDL, Ultima introduces (1) mechanisms (e.g., Transpose AllReduce, unreliable connection-oriented transport, and adaptive timeout) to improve the DDL jobs' tail execution time, and (2) strategies (e.g., Hadamard Transform) to mitigate the impact of gradient drops on model accuracy. Our evaluation shows that Ultima achieves 60% faster time-to-accuracy (TTA), on average, when operating in shared environments (e.g., public cloud), and is on par with existing algorithms (e.g., Ring-AllReduce) in dedicated environments (like HPC).
SAGE-ICP: Semantic Information-Assisted ICP
Authors: Jiaming Cui, Jiming Chen, Liang Li
Subjects: Robotics (cs.RO); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Robust and accurate pose estimation in unknown environments is an essential part of robotic applications. We focus on LiDAR-based point-to-point ICP combined with effective semantic information. This paper proposes a novel semantic information-assisted ICP method named SAGE-ICP, which leverages semantics in odometry. The semantic information for the whole scan is timely and efficiently extracted by a 3D convolution network, and these point-wise labels are deeply involved in every part of the registration, including semantic voxel downsampling, data association, adaptive local map, and dynamic vehicle removal. Unlike previous semantic-aided approaches, the proposed method can improve localization accuracy in large-scale scenes even if the semantic information has certain errors. Experimental evaluations on KITTI and KITTI-360 show that our method outperforms the baseline methods, and improves accuracy while maintaining real-time performance, i.e., runs faster than the sensor frame rate.
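One plausible reading of semantic voxel downsampling is to deduplicate points per (voxel, label) pair so that sparse classes survive downsampling instead of being swallowed by dominant ones; the rule below is our own sketch, not necessarily SAGE-ICP's:

```python
import numpy as np

def semantic_voxel_downsample(points, labels, voxel=0.5):
    """Keep one point per (voxel, semantic label) pair."""
    keys = np.floor(points / voxel).astype(np.int64)
    keys = np.concatenate([keys, labels[:, None]], axis=1)
    _, keep = np.unique(keys, axis=0, return_index=True)
    keep = np.sort(keep)
    return points[keep], labels[keep]

rng = np.random.default_rng(14)
pts = rng.uniform(-10, 10, size=(5000, 3))
lab = rng.integers(0, 4, size=5000)        # e.g. road/building/vehicle/other
dpts, dlab = semantic_voxel_downsample(pts, lab)
print(len(pts), "->", len(dpts), "points")
```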
GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning
Abstract
Graph-structured data is ubiquitous in the world, modeling complex relationships between objects and enabling various Web applications. Daily influxes of unlabeled graph data on the Web offer immense potential for these applications. Graph self-supervised algorithms have achieved significant success in acquiring generic knowledge from abundant unlabeled graph data. These pre-trained models can be applied to various downstream Web applications, saving training time and improving downstream (target) performance. However, different graphs, even across seemingly similar domains, can differ significantly in terms of attribute semantics, posing difficulties, if not infeasibility, for transferring the pre-trained models to downstream tasks. For example, the additional task-specific node information in downstream tasks (specificity) is usually deliberately omitted so that the pre-trained representation (transferability) can be leveraged. This trade-off is termed the "transferability-specificity dilemma" in this work. To address this challenge, we introduce an innovative deployment module coined GraphControl, motivated by ControlNet, to realize better graph domain transfer learning. Specifically, by leveraging universal structural pre-trained models and GraphControl, we align the input space across various graphs and incorporate unique characteristics of target data as conditional inputs. These conditions are progressively integrated into the model during fine-tuning or prompt tuning through ControlNet, facilitating personalized deployment. Extensive experiments show that our method significantly enhances the adaptability of pre-trained models on target attributed datasets, achieving a 1.4-3x performance gain. Furthermore, it outperforms training-from-scratch methods on target data by a comparable margin and exhibits faster convergence.
Automatic Control of Reactive Brain Computer Interfaces
Authors: Pex Tufvesson, Frida Heskebeck
Subjects: Systems and Control (eess.SY); Human-Computer Interaction (cs.HC)
Abstract
This article discusses practical and theoretical aspects of real-time brain computer interface control methods based on Bayesian statistics. We investigate and improve the performance of automatic control and feedback algorithms of a reactive brain computer interface based on a visual oddball paradigm for faster statistical convergence. We introduce transfer learning using Gaussian mixture models, enabling a ready-to-use setup.
Building hierarchies of semiclassical Jacobi polynomials for spectral methods in annuli
Authors: Ioannis P. A. Papadopoulos, Timon S. Gutleb, Richard M. Slevinsky, Sheehan Olver
Abstract
We discuss computing with hierarchies of families of (potentially weighted) semiclassical Jacobi polynomials which arise in the construction of multivariate orthogonal polynomials. In particular, we outline how to build connection and differentiation matrices with optimal complexity and compute analysis and synthesis operations in quasi-optimal complexity. We investigate a particular application of these results to constructing orthogonal polynomials in annuli, called the generalised Zernike annular polynomials, which lead to sparse discretisations of partial differential equations. We compare against a scaled-and-shifted Chebyshev--Fourier series showing that in general the annular polynomials converge faster when approximating smooth functions and have better conditioning. We also construct a sparse spectral element method by combining disk and annulus cells, which is highly effective for solving PDEs with radially discontinuous variable coefficients and data.
Approximating Subset Sum Ratio faster than Subset Sum
Abstract
Subset Sum Ratio is the following optimization problem: Given a set of $n$ positive numbers $I$, find disjoint subsets $X,Y \subseteq I$ minimizing the ratio $\max\{\Sigma(X)/\Sigma(Y),\Sigma(Y)/\Sigma(X)\}$, where $\Sigma(Z)$ denotes the sum of all elements of $Z$. Subset Sum Ratio is an optimization variant of the Equal Subset Sum problem. It was introduced by Woeginger and Yu in '92 and is known to admit an FPTAS [Bazgan, Santha, Tuza '98]. The best approximation schemes before this work had running time $O(n^4/\varepsilon)$ [Melissinos, Pagourtzis '18], $\tilde O(n^{2.3}/\varepsilon^{2.6})$ and $\tilde O(n^2/\varepsilon^3)$ [Alonistiotis et al. '22]. In this work, we present an improved approximation scheme for Subset Sum Ratio running in time $O(n / \varepsilon^{0.9386})$. Here we assume that the items are given in sorted order, otherwise we need an additional running time of $O(n \log n)$ for sorting. Our improved running time simultaneously improves the dependence on $n$ to linear and the dependence on $1/\varepsilon$ to sublinear. For comparison, the related Subset Sum problem admits an approximation scheme running in time $O(n/\varepsilon)$ [Gens, Levner '79]. If one achieved an approximation scheme with running time $\tilde O(n / \varepsilon^{0.99})$ for Subset Sum, it would falsify the Strong Exponential Time Hypothesis [Abboud, Bringmann, Hermelin, Shabtay '19] as well as the Min-Plus-Convolution Hypothesis [Bringmann, Nakos '21]. We thus establish that Subset Sum Ratio admits faster approximation schemes than Subset Sum. This comes as a surprise, since at any point in time before this work the best known approximation scheme for Subset Sum Ratio had a worse running time than the best known approximation scheme for Subset Sum.
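For intuition, the problem statement translates directly into a brute-force reference implementation (exponential, usable only for tiny $n$, offered just to make the objective concrete):

```python
from itertools import product

def subset_sum_ratio(items):
    """Exhaustively assign each item to X, Y, or neither; return the best ratio."""
    best, arg = float("inf"), None
    for assign in product((0, 1, 2), repeat=len(items)):
        sx = sum(v for v, a in zip(items, assign) if a == 1)
        sy = sum(v for v, a in zip(items, assign) if a == 2)
        if sx == 0 or sy == 0:
            continue                       # both subsets must be non-empty
        ratio = max(sx / sy, sy / sx)
        if ratio < best:
            best, arg = ratio, assign
    return best, arg

print(subset_sum_ratio([1, 3, 4, 7]))      # X={1,3}, Y={4} achieves ratio 1.0
```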
Keyword: mobile
Extended Reality via Cooperative NOMA in Hybrid Cloud/Mobile-Edge Computing Networks
Abstract
Extended reality (XR) applications often perform resource-intensive tasks, which are computed remotely, a process that prioritizes the latency criticality aspect. To this end, this paper shows that through leveraging the power of the central cloud (CC), the close proximity of edge computers (ECs), and the flexibility of uncrewed aerial vehicles (UAVs), a UAV-aided hybrid cloud/mobile-edge computing architecture promises to handle the intricate requirements of future XR applications. In this context, this paper distinguishes between two types of XR devices, namely, strong and weak devices. The paper then introduces a cooperative non-orthogonal multiple access (Co-NOMA) scheme, pairing strong and weak devices, so as to aid the XR devices' quality-of-user experience by intelligently selecting either the direct or the relay links toward the weak XR devices. A sum logarithmic-rate maximization problem is thus formulated so as to jointly determine the computation and communication resources and the link-selection strategy as a means to strike a trade-off between system throughput and fairness. Subject to realistic network constraints, e.g., power consumption and delay, the optimization problem is then solved iteratively via discrete relaxations, successive-convex approximation, and fractional programming, an approach which can be implemented in a distributed fashion across the network. Simulation results validate the proposed algorithms' performance in terms of log-rate maximization, delay sensitivity, scalability, and runtime performance. The practical distributed Co-NOMA implementation is particularly shown to offer appreciable benefits over traditional multiple access and NOMA methods, highlighting its applicability in decentralized XR systems.
CarDS-Plus ECG Platform: Development and Feasibility Evaluation of a Multiplatform Artificial Intelligence Toolkit for Portable and Wearable Device Electrocardiograms
Authors: Sumukh Vasisht Shankar, Evangelos K Oikonomou, Rohan Khera
Subjects: Machine Learning (cs.LG); Signal Processing (eess.SP)
Abstract
In the rapidly evolving landscape of modern healthcare, the integration of wearable & portable technology provides a unique opportunity for personalized health monitoring in the community. Devices like the Apple Watch, FitBit, and AliveCor KardiaMobile have revolutionized the acquisition and processing of intricate health data streams. Amidst the variety of data collected by these gadgets, single-lead electrocardiogram (ECG) recordings have emerged as a crucial source of information for monitoring cardiovascular health. There have been significant advances in artificial intelligence capable of interpreting these 1-lead ECGs, facilitating clinical diagnosis as well as the detection of rare cardiac disorders. This design study describes the development of an innovative multiplatform system aimed at the rapid deployment of AI-based ECG solutions for clinical investigation & care delivery. The study examines design considerations, aligning them with specific applications, and develops data flows to maximize efficiency for research & clinical use. This process encompasses the reception of single-lead ECGs from diverse wearable devices, channeling this data into a centralized data lake & facilitating real-time inference through AI models for ECG interpretation. An evaluation of the platform demonstrates a mean duration from acquisition to reporting of results of 33.0 to 35.7 seconds, after a standard 30 second acquisition. There were no substantial differences in acquisition-to-reporting times across two commercially available devices (Apple Watch and KardiaMobile). These results demonstrate the successful translation of design principles into a fully integrated & efficient strategy for leveraging 1-lead ECGs across platforms & interpretation by AI-ECG algorithms. Such a platform is critical to translating AI discoveries for wearable and portable ECG devices into clinical impact through rapid deployment.
Pre-Trained Masked Image Model for Mobile Robot Navigation
Abstract
2D top-down maps are commonly used for the navigation and exploration of mobile robots through unknown areas. Typically, the robot builds the navigation maps incrementally from local observations using onboard sensors. Recent works have shown that predicting the structural patterns in the environment through learning-based approaches can greatly enhance task efficiency. While many such works build task-specific networks using limited datasets, we show that the existing foundational vision networks can accomplish the same without any fine-tuning. Specifically, we use Masked Autoencoders, pre-trained on street images, to present novel applications for field-of-view expansion, single-agent topological exploration, and multi-agent exploration for indoor mapping, across different input modalities. Our work motivates the use of foundational vision models for generalized structure prediction-driven applications, especially where training data are scarce. For more qualitative results see https://raaslab.org/projects/MIM4Robots.
Automatic Macro Mining from Interaction Traces at Scale
Authors: Forrest Huang, Gang Li, Tao Li, Yang Li
Subjects: Human-Computer Interaction (cs.HC); Computation and Language (cs.CL); Machine Learning (cs.LG)
Abstract
Macros are building block tasks of our everyday smartphone activity (e.g., "login", or "booking a flight"). Effectively extracting macros is important for understanding mobile interaction and enabling task automation. These macros are however difficult to extract at scale as they can be comprised of multiple steps yet hidden within programmatic components of the app. In this paper, we introduce a novel approach based on Large Language Models (LLMs) to automatically extract semantically meaningful macros from both random and user-curated mobile interaction traces. The macros produced by our approach are automatically tagged with natural language descriptions and are fully executable. To examine the quality of extraction, we conduct multiple studies, including user evaluation, comparative analysis against human-curated tasks, and automatic execution of these macros. These experiments and analyses show the effectiveness of our approach and the usefulness of extracted macros in various downstream applications.
Rate Adaptation Aware Positioning for Flying Gateways using Reinforcement Learning
Abstract
With the growing connectivity demands, Unmanned Aerial Vehicles (UAVs) have emerged as a prominent component in the deployment of Next Generation On-demand Wireless Networks. However, current UAV positioning solutions typically neglect the impact of Rate Adaptation (RA) algorithms or simplify its effect by considering ideal and non-implementable RA algorithms. This work proposes the Rate Adaptation aware RL-based Flying Gateway Positioning (RARL) algorithm, a positioning method for Flying Gateways that applies Deep Q-Learning, accounting for the dynamic data rate imposed by the underlying RA algorithm. The RARL algorithm aims to maximize the throughput of the flying wireless links serving one or more Flying Access Points, which in turn serve ground terminals. The performance evaluation of the RARL algorithm demonstrates that it is capable of taking into account the effect of the underlying RA algorithm and achieve the maximum throughput in all analysed static and mobile scenarios.
Secure Decentralized Learning with Blockchain
Authors: Xiaoxue Zhang, Yifan Hua, Chen Qian
Subjects: Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Abstract
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices, which preserves data privacy and optimizes communication efficiency. To avoid the single point of failure problem in FL, decentralized federated learning (DFL) has been proposed to use peer-to-peer communication for model aggregation, which has been considered an attractive solution for machine learning tasks on distributed personal devices. However, this process is vulnerable to attackers who share false models and data. If there exists a group of malicious clients, they might harm the performance of the model by carrying out a poisoning attack. In addition, in DFL, clients often lack the incentive to contribute their computing power to model training. In this paper, we propose Blockchain-based Decentralized Federated Learning (BDFL), which leverages a blockchain for decentralized model verification and auditing. BDFL includes an auditor committee for model verification, an incentive mechanism to encourage the participation of clients, a reputation model to evaluate the trustworthiness of clients, and a protocol suite for dynamic network updates. Evaluation results show that, with the reputation mechanism, BDFL achieves fast model convergence and high accuracy on real datasets even if there exist 30\% malicious clients in the system.
CrashTranslator: Automatically Reproducing Mobile Application Crashes Directly from Stack Trace
Authors: Yuchao Huang, Junjie Wang, Zhe Liu, Yawen Wang, Song Wang, Chunyang Chen, Yuanzhe Hu, Qing Wang
Abstract
Crash reports are vital for software maintenance since they allow the developers to be informed of the problems encountered in the mobile application. Before fixing, developers need to reproduce the crash, which is an extremely time-consuming and tedious task. Existing studies have conducted automatic crash reproduction with reproducing steps described in natural language. Yet we find a non-negligible portion of crash reports contain only the stack trace at the moment of the crash. Such stack-trace-only crashes merely reveal the last GUI page when the crash occurs and lack step-by-step guidance. Developers tend to spend more effort understanding the problem and reproducing the crash, and existing techniques cannot work on this, calling for greater automatic support. This paper proposes an approach named CrashTranslator to automatically reproduce mobile application crashes directly from the stack trace. It accomplishes this by leveraging a pre-trained Large Language Model to predict the exploration steps for triggering the crash, and designing a reinforcement learning based technique to mitigate the inaccurate prediction and guide the search holistically. We evaluate CrashTranslator on 75 crash reports involving 58 popular Android apps, and it successfully reproduces 61.3% of the crashes, outperforming the state-of-the-art baselines by 109% to 206%. Besides, the average reproducing time is 68.7 seconds, outperforming the baselines by 302% to 1611%. We also evaluate the usefulness of CrashTranslator with promising results.
Integrated Sensing and Communication enabled Multiple Base Stations Cooperative Sensing Towards 6G
Abstract
Driven by the intelligent applications of sixth-generation (6G) mobile communication systems such as smart city and autonomous driving, which connect the physical and cyber spaces, integrated sensing and communication (ISAC) brings a revolutionary change to the base stations (BSs) of 6G by integrating radar sensing and communication in the same hardware and wireless resources. However, given the requirements of long-range and accurate sensing in the applications of smart city and autonomous driving, a single ISAC-enabled BS is still limited in sensing range and accuracy. With the networked infrastructure of mobile communication systems, multi-BS cooperative sensing is a natural choice to satisfy the requirement of long-range and accurate sensing. In this article, a framework for multi-BS cooperative sensing is proposed, breaking through the limitation of single-BS sensing. The enabling technologies, including unified ISAC performance metrics, ISAC signal design and optimization, interference management, and cooperative sensing algorithms, are introduced in detail. Performance evaluation results are provided to verify the effectiveness of the multi-BS cooperative sensing schemes. With ISAC-enabled multi-BS cooperative sensing (ISAC-MCS), intelligent infrastructures connecting the physical and cyber spaces can be established, ushering in the era of 6G and promoting the intelligence of everything.
Textiverse: A Scalable Visual Analytics System for Exploring Geotagged and Timestamped Text Corpora
Abstract
We propose Textiverse, a big data approach for mining geotagged, timestamped textual data on a map, such as Twitter feeds, crime reports, or restaurant reviews. We use a scalable data management pipeline that extracts keyphrases from online databases in parallel. We speed up this time-consuming step so that it outpaces the content creation rate of popular social media. The result is presented in a web-based interface that integrates with Google Maps to visualize textual content of massive scale. The visual design is based on aggregating spatial regions into discrete sites and rendering each such site as a circular tag cloud. To demonstrate the intended use of our technique, we first show how it can be used to characterize the U.S. National Science Foundation funding status based on all 489,151 awards. We then apply the same technique to visually represent a more spatially scattered and linguistically informal dataset: 1.2 million Twitter posts about the Android mobile operating system.
Integrated Sensing and Communication Neighbor Discovery for MANET with Gossip Mechanism
Abstract
Mobile Ad hoc Networks (MANETs), supporting Machine-Type Communication (MTC), have a strong demand for rapid networking. Neighbor Discovery (ND) is a key initial step in configuring MANETs and faces a serious challenge in decreasing convergence time. Integrated Sensing and Communication (ISAC), one of the potential key technologies of the 6th Generation (6G) mobile networks, can provide sensing data as prior information to accelerate ND convergence. To further reduce the convergence time of ND, this paper introduces an ISAC-enabled gossip mechanism into the ND algorithm. The prior information acquired by ISAC reduces the information redundancy introduced by the gossip mechanism and thus decreases the probability of collision, which further improves convergence speed. The average number of discovered nodes within a given period is derived and applied as the critical metric for evaluating the performance of ND algorithms. The simulation results confirm the correctness of the theoretical derivation and show that the interplay between the prior-information mechanism and the gossip mechanism significantly reduces the convergence time. In addition, to address imperfect sensing information, reinforcement learning is applied. Under the convergence-condition constraints, the non-Reply and non-Stop Algorithm based on Gossip and Q-learning (GQ-nRnS) proposed in this paper not only ensures the completeness of ND but also maintains a high convergence speed. Compared with the Q-learning-based ND algorithm (Q-ND), the average convergence time of GQ-nRnS is reduced by about 66.4%.
PoRF: Pose Residual Field for Accurate Neural Surface Reconstruction
Authors: Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, Philip Torr
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Neural surface reconstruction is sensitive to camera pose noise, even when state-of-the-art pose estimators like COLMAP or ARKit are used. More importantly, existing Pose-NeRF joint optimisation methods have struggled to improve pose accuracy in challenging real-world scenarios. To overcome these challenges, we introduce the pose residual field (PoRF), a novel implicit representation that uses an MLP to regress pose updates. This is more robust than conventional per-pose parameter optimisation because parameter sharing leverages global information over the entire sequence. Furthermore, we propose an epipolar geometry loss that enhances supervision by leveraging correspondences exported from COLMAP results, without extra computational overhead. Our method yields promising results. On the DTU dataset, we reduce the rotation error of COLMAP poses by 78%, decreasing the reconstruction Chamfer distance from 3.48 mm to 0.85 mm. On the MobileBrick dataset, which contains casually captured unbounded 360-degree videos, our method refines ARKit poses and improves the reconstruction F1 score from 69.18 to 75.67, outperforming even the dataset-provided ground-truth poses (75.14). These results demonstrate the efficacy of our approach in refining camera poses and improving the accuracy of neural surface reconstruction in real-world scenarios.
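Editor's note: to make the core idea concrete, here is a minimal sketch of a pose residual field, assuming one shared MLP that maps a per-frame embedding to a 6-DoF pose update; all names and sizes are illustrative, not the paper's exact architecture.

```python
# A shared MLP regresses a pose residual (axis-angle rotation + translation)
# for every frame; parameter sharing across frames is the key difference
# from optimising each camera pose independently.
import torch
import torch.nn as nn

class PoseResidualField(nn.Module):
    def __init__(self, num_frames: int, hidden: int = 128):
        super().__init__()
        self.frame_embed = nn.Embedding(num_frames, 32)
        self.mlp = nn.Sequential(
            nn.Linear(32, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),  # 3 rotation + 3 translation residuals
        )
        # Start near zero so optimisation begins from the initial
        # (e.g. COLMAP or ARKit) poses.
        nn.init.zeros_(self.mlp[-1].weight)
        nn.init.zeros_(self.mlp[-1].bias)

    def forward(self, frame_ids: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.frame_embed(frame_ids))  # (B, 6) residuals

residuals = PoseResidualField(num_frames=50)(torch.arange(50))
print(residuals.shape)  # torch.Size([50, 6])
```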
S4C: Self-Supervised Semantic Scene Completion with Neural Fields
Authors: Adrian Hayler, Felix Wimbauer, Dominik Muhle, Christian Rupprecht, Daniel Cremers
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
3D semantic scene understanding is a fundamental challenge in computer vision. It enables mobile agents to autonomously plan and navigate arbitrary environments. Semantic scene completion (SSC) formalizes this challenge as jointly estimating dense geometry and semantic information from sparse observations of a scene. Current methods for SSC are generally trained on 3D ground truth based on aggregated LiDAR scans. This process relies on special sensors and hand annotation, which are costly and do not scale well. To overcome this issue, our work presents S4C, the first self-supervised approach to SSC that does not rely on 3D ground truth data. Our proposed method can reconstruct a scene from a single image, relying only on videos and pseudo segmentation ground truth generated by an off-the-shelf image segmentation network during training. Unlike existing methods, which use discrete voxel grids, we represent scenes as implicit semantic fields. This formulation allows querying any point within the camera frustum for occupancy and semantic class. Our architecture is trained through rendering-based self-supervised losses. Nonetheless, our method achieves performance close to fully supervised state-of-the-art methods. Additionally, our method demonstrates strong generalization capabilities and can synthesize accurate segmentation maps for far-away viewpoints.
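Editor's note: a minimal sketch of what "implicit semantic field" means in practice, assuming an MLP that maps a 3D point to occupancy density and per-class logits; the network shape and the absence of positional encoding are simplifications, not the paper's design.

```python
# Any point in the camera frustum can be queried for occupancy and
# semantic class, replacing a discrete voxel grid.
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    def __init__(self, num_classes: int = 19, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.semantic_head = nn.Linear(hidden, num_classes)

    def forward(self, xyz: torch.Tensor):
        h = self.net(xyz)
        density = torch.relu(self.density_head(h))  # non-negative occupancy
        logits = self.semantic_head(h)              # semantic class scores
        return density, logits

density, logits = SemanticField()(torch.rand(1024, 3))
print(density.shape, logits.shape)  # (1024, 1) (1024, 19)
```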
AG-CVG: Coverage Planning with a Mobile Recharging UGV and an Energy-Constrained UAV
Abstract
In this paper, we present an approach to coverage path planning for a team consisting of an energy-constrained Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV). Both the UAV and the UGV have predefined areas that they must cover. The goal is to achieve complete coverage by both robots while minimizing the coverage time. The UGV can also serve as a mobile recharging station, and the UAV and UGV must occasionally rendezvous for recharging. We propose a heuristic method to address this NP-hard planning problem. Our approach first determines coverage paths without factoring in energy constraints. Subsequently, we cluster segments of these paths and employ graph matching to assign UAV clusters to UGV clusters for efficient recharging management. We perform numerical analysis on real-world coverage applications and show that, compared with a greedy approach, our method reduces rendezvous overhead by 11.33% on average. We demonstrate a proof of concept with a team of a VOXL m500 drone and a Clearpath Jackal ground vehicle, providing a complete system from the offline algorithm to field execution.
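Editor's note: the cluster-assignment step can be illustrated with a standard assignment solver. The sketch below assumes Euclidean distance between cluster centroids as the cost; the paper's actual clustering and cost model are not reproduced here.

```python
# Assign UAV path-segment clusters to UGV clusters by minimum-cost
# matching (Hungarian algorithm), a stand-in for the graph-matching step.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
uav_clusters = rng.uniform(0, 100, size=(5, 2))  # UAV cluster centroids
ugv_clusters = rng.uniform(0, 100, size=(5, 2))  # UGV cluster centroids

# Cost = detour distance for the UAV to rendezvous with a UGV cluster.
cost = np.linalg.norm(uav_clusters[:, None, :] - ugv_clusters[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
for u, g in zip(rows, cols):
    print(f"UAV cluster {u} recharges near UGV cluster {g} (cost {cost[u, g]:.1f})")
```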
MatFormer: Nested Transformer for Elastic Inference
Abstract
Transformer models are deployed in a wide range of settings, from multi-accelerator clusters to standalone mobile phones. The diverse inference constraints in these scenarios require practitioners to train foundation models such as PaLM 2, Llama, and ViTs as a series of models of varying sizes. Due to significant training costs, only a select few model sizes are trained and supported, limiting fine-grained control over relevant tradeoffs, including latency, cost, and accuracy. This work introduces MatFormer, a nested Transformer architecture designed to offer elasticity across a variety of deployment constraints. Each Feed Forward Network (FFN) block of a MatFormer model is jointly optimized with a few nested smaller FFN blocks. This training procedure allows for the Mix'n'Match of model granularities across layers -- i.e., a trained universal MatFormer model enables the extraction of hundreds of accurate smaller models that were never explicitly optimized. We empirically demonstrate MatFormer's effectiveness across different model classes (decoders and encoders), modalities (language and vision), and scales (up to 2.6B parameters). We find that a 2.6B decoder-only MatFormer language model (MatLM) allows us to extract smaller models spanning from 1.5B to 2.6B, each exhibiting comparable validation loss and one-shot downstream evaluations to their independently trained counterparts. Furthermore, we observe that smaller encoders extracted from a universal MatFormer-based ViT (MatViT) encoder preserve the metric-space structure for adaptive large-scale retrieval. Finally, we showcase that speculative decoding with the accurate and consistent submodels extracted from MatFormer can further reduce inference latency.
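Editor's note: a minimal sketch of a nested FFN in this style, assuming smaller FFNs are prefixes of the largest one so that a submodel is "extracted" by slicing the weight matrices; dimensions and the granularity schedule are illustrative.

```python
# Smaller FFNs reuse the first g hidden units of the largest FFN, so one
# set of weights serves every granularity; Mix'n'Match then picks a
# granularity per layer at deployment time.
import torch
import torch.nn as nn

class NestedFFN(nn.Module):
    def __init__(self, d_model=64, d_ff=256, granularities=(64, 128, 256)):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)
        self.granularities = granularities

    def forward(self, x, g: int):
        # Slice the hidden dimension: g selects the nested submodel.
        h = torch.relu(x @ self.w_in.weight[:g].T + self.w_in.bias[:g])
        return h @ self.w_out.weight[:, :g].T + self.w_out.bias

ffn = NestedFFN()
x = torch.rand(2, 64)
# Joint training would combine losses over all granularities.
outs = [ffn(x, g) for g in ffn.granularities]
print([o.shape for o in outs])
```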
Keyword: pruning
SparseCoder: Advancing Source Code Analysis with Sparse Attention and Learned Token Pruning
Authors: Xueqi Yang, Mariusz Jakubowski, Kelly Kang, Haojie Yu, Tim Menzies
Abstract
As software projects rapidly evolve, software artifacts become more complex and the defects hidden within them become harder to identify. Emerging Transformer-based approaches, though achieving remarkable performance, struggle with long code sequences because their self-attention mechanism scales quadratically with sequence length. This paper introduces SparseCoder, an innovative approach incorporating sparse attention and a learned token pruning (LTP) method (adapted from natural language processing) to address this limitation. Extensive experiments on a large-scale dataset for vulnerability detection demonstrate the effectiveness and efficiency of SparseCoder, reducing the scaling of long code sequence analysis from quadratic to linear in comparison to CodeBERT and RoBERTa. We further achieve a 50% reduction in FLOPs with a negligible performance drop of less than 1% compared to the Transformer, by leveraging sparse attention. Moreover, SparseCoder goes beyond making "black-box" decisions by elucidating the rationale behind them. Code segments that contribute to the final decision can be highlighted with importance scores, offering an interpretable, transparent analysis tool for the software engineering landscape.
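Editor's note: a minimal sketch of learned token pruning as adapted from NLP: tokens whose mean attention weight falls below a threshold are dropped before the next layer. The threshold is a plain constant here; LTP's soft-masked training of that threshold is omitted for brevity.

```python
# Drop unimportant tokens based on how much attention they receive.
import torch

def prune_tokens(hidden, attn_weights, threshold):
    # attn_weights: (batch, heads, seq, seq) softmax attention matrix
    score = attn_weights.mean(dim=(1, 2))  # (batch, seq) importance
    keep = score >= threshold              # binary keep mask
    keep[:, 0] = True                      # always keep the [CLS] token
    # Gather kept tokens (batch size 1 kept for simplicity of the sketch).
    return hidden[keep].unsqueeze(0), keep

hidden = torch.rand(1, 10, 16)
attn = torch.softmax(torch.rand(1, 4, 10, 10), dim=-1)
pruned, keep = prune_tokens(hidden, attn, threshold=0.1)
print(hidden.shape, "->", pruned.shape)
```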
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Abstract
Given a real-world dataset, dataset condensation (DC) aims to synthesize a significantly smaller dataset that captures its knowledge, enabling model training with high performance. Recent works propose to enhance DC with data parameterization, which condenses data into parameterized data containers rather than pixel space. The intuition behind data parameterization is to encode shared features of images to avoid additional storage costs. In this paper, we recognize that images share common features in a hierarchical way due to the inherent hierarchical structure of the classification system, which current data parameterization methods overlook. To better align DC with this hierarchical nature and encourage more efficient information sharing inside data containers, we propose a novel data parameterization architecture, the Hierarchical Memory Network (HMN). HMN stores condensed data in a three-tier structure, representing dataset-level, class-level, and instance-level features. Another helpful property of the hierarchical architecture is that HMN naturally ensures good independence among images despite the information sharing. This enables instance-level pruning for HMN to remove redundant information and further enhance performance. We evaluate HMN on four public datasets (SVHN, CIFAR10, CIFAR100, and Tiny-ImageNet) and compare HMN with eight DC baselines. The evaluation results show that our proposed method outperforms all baselines, even when trained with a batch-based loss consuming less GPU memory.
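Editor's note: a minimal sketch of a three-tier container in this spirit: an image code is the sum of a dataset-level feature, its class-level feature, and its own instance-level feature, then decoded to pixels. The linear decoder and all sizes are placeholders, not the paper's architecture.

```python
# Three tiers of learnable features share information hierarchically;
# pruning an instance-level feature removes exactly one synthetic image.
import torch
import torch.nn as nn

class HierarchicalMemory(nn.Module):
    def __init__(self, n_classes=10, per_class=50, d=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.dataset_feat = nn.Parameter(torch.randn(d))           # shared by all
        self.class_feat = nn.Parameter(torch.randn(n_classes, d))  # per class
        self.inst_feat = nn.Parameter(torch.randn(n_classes, per_class, d))
        self.decoder = nn.Linear(d, img_dim)

    def forward(self, cls: int, idx: int) -> torch.Tensor:
        code = self.dataset_feat + self.class_feat[cls] + self.inst_feat[cls, idx]
        return self.decoder(code)  # flattened synthetic image

img = HierarchicalMemory()(cls=3, idx=7)
print(img.shape)  # torch.Size([3072])
```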
Keyword: diffusion
Monsters in the Dark: Sanitizing Hidden Threats with Diffusion Models
Authors: Preston K. Robinette, Daniel Moyer, Taylor T. Johnson
Abstract
Steganography is the art of hiding information in plain sight. This form of covert communication can be used by bad actors to propagate malware, exfiltrate victim data, and communicate with other bad actors. Current image steganography defenses rely upon steganalysis, i.e., the detection of hidden messages. These methods, however, are non-blind, as they require information about known steganography techniques, and are easily bypassed. Recent work has instead focused on a defense mechanism known as sanitization, which eliminates hidden information from images. In this work, we introduce DM-SUDS, a novel blind deep learning steganography sanitization method that utilizes a diffusion model framework to sanitize universal and dependent steganography while preserving image quality. We evaluate this approach against state-of-the-art deep learning sanitization frameworks and provide further detailed analysis through an ablation study. DM-SUDS outperforms previous sanitization methods, improving image preservation MSE by 71.32%, PSNR by 22.43%, and SSIM by 17.30%. This is the first blind deep learning image sanitization framework to achieve these image quality results.
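Editor's note: the general mechanism of diffusion-based sanitization can be sketched as pushing the image part-way along the forward noising process and then denoising it back, destroying hidden payloads while keeping content. The schedule below is simplified and `denoiser` is a stand-in for a pretrained diffusion model's full reverse loop, not the paper's system.

```python
# Forward-diffuse to an intermediate time t*, then denoise back.
import torch

def sanitize(image, denoiser, t_star=0.3):
    # Simplified variance-preserving noising:
    #   x_t = sqrt(1 - t) * x_0 + sqrt(t) * noise
    noise = torch.randn_like(image)
    x_t = (1 - t_star) ** 0.5 * image + t_star ** 0.5 * noise
    # A real system iterates the model's reverse steps from t* down to 0;
    # one call stands in for that loop here.
    return denoiser(x_t, t_star)

identity_denoiser = lambda x, t: x.clamp(0, 1)  # placeholder model
clean = sanitize(torch.rand(1, 3, 32, 32), identity_denoiser)
print(clean.shape)
```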
ObjectComposer: Consistent Generation of Multiple Objects Without Fine-tuning
Authors: Alec Helbling, Evan Montoya, Duen Horng Chau
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
Recent text-to-image generative models can generate high-fidelity images from text prompts. However, these models struggle to consistently generate the same objects in different contexts with the same appearance. Consistent object generation is important to many downstream tasks like generating comic book illustrations with consistent characters and setting. Numerous approaches attempt to solve this problem by extending the vocabulary of diffusion models through fine-tuning. However, even lightweight fine-tuning approaches can be prohibitively expensive to run at scale and in real-time. We introduce a method called ObjectComposer for generating compositions of multiple objects that resemble user-specified images. Our approach is training-free, leveraging the abilities of preexisting models. We build upon the recent BLIP-Diffusion model, which can generate images of single objects specified by reference images. ObjectComposer enables the consistent generation of compositions containing multiple specific objects simultaneously, all without modifying the weights of the underlying models.
Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE
Authors: Marius Arvinte, Cory Cornelius, Jason Martin, Nageen Himayat
Abstract
Beyond their impressive sampling capabilities, score-based diffusion models offer a powerful analysis tool in the form of unbiased density estimation of a query sample under the training data distribution. In this work, we investigate the robustness of density estimation using the probability flow (PF) neural ordinary differential equation (ODE) model against gradient-based likelihood maximization attacks and the relation to sample complexity, where the compressed size of a sample is used as a measure of its complexity. We introduce and evaluate six gradient-based log-likelihood maximization attacks, including a novel reverse integration attack. Our experimental evaluations on CIFAR-10 show that density estimation using the PF ODE is robust against high-complexity, high-likelihood attacks, and that in some cases adversarial samples are semantically meaningful, as expected from a robust estimator.
A new mixed finite element method for arbitrary element pair for a quasi-static nonlinear permeability thermo-poroelasticity model
Abstract
In this paper, we develop a multiphysics finite element method for solving a quasi-static thermo-poroelasticity model with nonlinear permeability. The model involves multiple physical processes, such as deformation, pressure, diffusion, and heat transfer. To reveal the multiphysical processes of deformation, diffusion, and heat transfer, we reformulate the original model as a coupled fluid problem: a generalized Stokes equation coupled with two reaction-diffusion equations. Then, we prove the existence and uniqueness of the weak solution, via the $B$-operator technique for the original problem and via sequence approximation for the reformulated problem. For the reformulated problem, we propose a fully discrete finite element method that can use arbitrary finite element pairs to solve for the displacement $\mathbf{u}$, the pressure $\tau$, and the variables $\varpi, \varsigma$, with the backward Euler method for time discretization. Finally, we give a stability analysis of the proposed method and prove that the fully discrete multiphysics finite element method achieves an optimal convergence order. Numerical experiments show that the proposed method achieves good results under different finite element pairs, consistent with the theoretical analysis.
Denoising Task Routing for Diffusion Models
Authors: Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, Changick Kim
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Abstract
Diffusion models generate highly realistic images through learning a multi-step denoising process, naturally embodying the principles of multi-task learning (MTL). Despite the inherent connection between diffusion models and MTL, there remains an unexplored area in designing neural architectures that explicitly incorporate MTL into the framework of diffusion models. In this paper, we present Denoising Task Routing (DTR), a simple add-on strategy for existing diffusion model architectures to establish distinct information pathways for individual tasks within a single architecture by selectively activating subsets of channels in the model. What makes DTR particularly compelling is its seamless integration of prior knowledge of denoising tasks into the framework: (1) Task Affinity: DTR activates similar channels for tasks at adjacent timesteps and shifts activated channels as sliding windows through timesteps, capitalizing on the inherent strong affinity between tasks at adjacent timesteps. (2) Task Weights: During the early stages (higher timesteps) of the denoising process, DTR assigns a greater number of task-specific channels, leveraging the insight that diffusion models prioritize reconstructing global structure and perceptually rich contents in earlier stages, and focus on simple noise removal in later stages. Our experiments demonstrate that DTR consistently enhances the performance of diffusion models across various evaluation protocols, all without introducing additional parameters. Furthermore, DTR contributes to accelerating convergence during training. Finally, we show the complementarity between our architectural approach and existing MTL optimization techniques, providing a more complete view of MTL within the context of diffusion training.
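Editor's note: the sliding-window channel routing can be illustrated with a toy mask function; the window sizing and sliding rule below are invented for illustration and are not the paper's schedule. The point is that adjacent timesteps activate overlapping channel subsets and earlier (noisier) steps get more channels.

```python
# Timestep-dependent channel mask: wider windows early, sliding with t.
import torch

def task_channel_mask(t: int, T: int, n_channels: int) -> torch.Tensor:
    frac = t / (T - 1)                              # 1.0 = noisiest step
    width = int(n_channels * (0.5 + 0.5 * frac))    # more channels early
    start = int((n_channels - width) * (1 - frac))  # slide with timestep
    mask = torch.zeros(n_channels)
    mask[start:start + width] = 1.0
    return mask

T, C = 1000, 8
for t in (999, 500, 0):
    print(t, task_channel_mask(t, T, C).tolist())
```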
Imitation Learning from Purified Demonstrations
Authors: Yunke Wang, Minjing Dong, Bo Du, Chang Xu
Abstract
Imitation learning has emerged as a promising approach for addressing sequential decision-making problems, under the assumption that expert demonstrations are optimal. However, in real-world scenarios, expert demonstrations are often imperfect, leading to challenges in effectively applying imitation learning. While existing research has focused on optimizing with imperfect demonstrations, training typically requires a certain proportion of optimal demonstrations to guarantee performance. To tackle these problems, we propose to purify the potential perturbations in imperfect demonstrations and subsequently conduct imitation learning from the purified demonstrations. Motivated by the success of diffusion models, we introduce a two-step purification via the diffusion process. In the first step, we apply a forward diffusion process to smooth out potential perturbations in imperfect demonstrations by introducing additional noise. Subsequently, a reverse generative process is used to recover the optimal expert demonstrations from the diffused ones. We provide theoretical evidence supporting our approach, demonstrating that the total variation distance between the purified and optimal demonstration distributions can be upper-bounded. Evaluation results on MuJoCo demonstrate the effectiveness of our method from different aspects.
State of the Art on Diffusion Models for Visual Computing
Authors: Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T. Barron, Amit H. Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, C. Karen Liu, Lingjie Liu, Ben Mildenhall, Matthias Nießner, Björn Ommer, Christian Theobalt, Peter Wonka, Gordon Wetzstein
Abstract
The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains, diffusion models are the generative AI architecture of choice. Within the last year alone, the literature on diffusion-based tools and applications has seen exponential growth and relevant papers are published across the computer graphics, computer vision, and AI communities with new works appearing daily on arXiv. This rapid growth of the field makes it difficult to keep up with all recent developments. The goal of this state-of-the-art report (STAR) is to introduce the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model, as well as overview important aspects of these generative AI tools, including personalization, conditioning, inversion, among others. Moreover, we give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing, categorized by the type of generated medium, including 2D images, videos, 3D objects, locomotion, and 4D scenes. Finally, we discuss available datasets, metrics, open challenges, and social implications. This STAR provides an intuitive starting point to explore this exciting topic for researchers, artists, and practitioners alike.
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes
Abstract
Learning the distribution of data on Riemannian manifolds is crucial for modeling data from non-Euclidean spaces, as required by many applications from diverse scientific fields. Yet existing generative models on manifolds suffer from expensive divergence computation or rely on approximations of the heat kernel. These limitations restrict their applicability to simple geometries and hinder scalability to high dimensions. In this work, we introduce the Riemannian Diffusion Mixture, a principled framework for building a generative process on manifolds as a mixture of endpoint-conditioned diffusion processes, instead of relying on the denoising approach of previous diffusion models; here the generative process is characterized by a drift that guides it toward the most probable endpoint with respect to the geometry of the manifold. We further propose a simple yet efficient training objective for learning the mixture process that is readily applicable to general manifolds. Our method outperforms previous generative models on various manifolds while scaling to high dimensions, and requires a dramatically reduced number of in-training simulation steps for general manifolds.
Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model
Authors: Shiyuan Yang, Xiaodong Chen, Jing Liao
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
Recently, text-to-image denoising diffusion probabilistic models (DDPMs) have demonstrated impressive image generation capabilities and have also been successfully applied to image inpainting. However, in practice, users often require more control over the inpainting process beyond textual guidance, especially when they want to composite objects with customized appearance, color, shape, and layout. Unfortunately, existing diffusion-based inpainting methods are limited to single-modal guidance and require task-specific training, hindering their cross-modal scalability. To address these limitations, we propose Uni-paint, a unified framework for multimodal inpainting that offers various modes of guidance, including unconditional, text-driven, stroke-driven, exemplar-driven inpainting, as well as a combination of these modes. Furthermore, our Uni-paint is based on pretrained Stable Diffusion and does not require task-specific training on specific datasets, enabling few-shot generalizability to customized images. We have conducted extensive qualitative and quantitative evaluations that show our approach achieves comparable results to existing single-modal methods while offering multimodal inpainting capabilities not available in other methods. Code will be available at https://github.com/ysy31415/unipaint.
Score Regularized Policy Optimization through Diffusion Behavior
Authors: Huayu Chen, Cheng Lu, Zhengyi Wang, Hang Su, Jun Zhu
Abstract
Recent developments in offline reinforcement learning have uncovered the immense potential of diffusion modeling, which excels at representing heterogeneous behavior policies. However, sampling from diffusion policies is considerably slow because it necessitates tens to hundreds of iterative inference steps for one action. To address this issue, we propose to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization. Our method enjoys powerful generative capabilities of diffusion modeling while completely circumventing the computationally intensive and time-consuming diffusion sampling scheme, both during training and evaluation. Extensive results on D4RL tasks show that our method boosts action sampling speed by more than 25 times compared with various leading diffusion-based methods in locomotion tasks, while still maintaining state-of-the-art performance.
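Editor's note: one plausible reading of the extraction step is sketched below: a deterministic actor maximizes the critic's Q while a pretrained behavior score model (an estimate of grad_a log mu(a|s)) nudges actions toward high behavior density through a straight-through surrogate term. All three networks are placeholders, and the surrogate is a first-order simplification, not the paper's exact objective.

```python
# Score-regularized deterministic policy extraction (toy dimensions).
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
critic = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1))
score_model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))

def actor_loss(states, lam=0.1):
    actions = actor(states)
    q = critic(torch.cat([states, actions], dim=-1)).mean()
    # Surrogate whose actor-gradient equals score * d(action)/d(theta),
    # pushing actions up the behavior log-density.
    score = score_model(torch.cat([states, actions], dim=-1)).detach()
    density_reg = (score * actions).sum(-1).mean()
    return -(q + lam * density_reg)

loss = actor_loss(torch.rand(32, 4))
loss.backward()
print(loss.item())
```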
WiGenAI: The Symphony of Wireless and Generative AI via Diffusion Models
Authors: Mehdi Letafati, Samad Ali, Matti Latva-aho
Subjects: Information Theory (cs.IT); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Abstract
Innovative foundation models, such as GPT-3 and stable diffusion models, have made a paradigm shift in the realm of artificial intelligence (AI) towards generative AI-based systems. In parallel, from a data communication and networking perspective, AI and machine learning (AI/ML) algorithms are envisioned to be pervasively incorporated into future generations of wireless communication systems, highlighting the need for novel AI-native solutions for emergent communication scenarios. In this article, we outline the applications of generative AI in wireless communication systems to lay the foundations for research in this field. Diffusion-based generative models, the new state-of-the-art paradigm of generative models, are introduced, and their applications in wireless communication systems are discussed. Two case studies are also presented to showcase how diffusion models can be exploited for the development of resilient AI-native communication systems. Specifically, we propose denoising diffusion probabilistic models (DDPMs) for a wireless communication scheme with non-ideal transceivers, where a 30% improvement is achieved in terms of bit error rate. As a second application, DDPMs are employed at the transmitter to shape the constellation symbols, highlighting robust out-of-distribution performance. Finally, future directions and open issues for the development of generative AI-based wireless systems are discussed to promote future research endeavors towards wireless generative AI (WiGenAI).
Adaptive Distributionally Robust Planning for Renewable-Powered Fast Charging Stations Under Decision-Dependent EV Diffusion Uncertainty
Authors: Yujia Li, Feng Qiu, Chenxi Hu, Yunhe Hou
Subjects: Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
When deploying fast charging stations (FCSs) to support long-distance trips of electric vehicles (EVs), there exist indirect network effects: while the gradual diffusion of EVs directly influences the timing and capacities of FCS allocation, the decisions for FCS allocations, in turn, impact the drivers' willingness to adopt EVs. This interplay, if neglected, can result in uncovered EVs and security issues on the grid side and even hinder the effective diffusion of EVs. In this paper, we explicitly incorporate this interdependence by quantifying EV adoption rates as decision-dependent uncertainties (DDUs) using decision-dependent ambiguity sets (DDASs). Then, a two-stage decision-dependent distributionally robust FCS planning (D$^3$R-FCSP) model is developed for adaptively deploying FCSs with on-site sources and expanding the coupled distribution network. A multi-period capacitated arc cover-path cover (MCACPC) model is incorporated to capture the EVs' recharging patterns to ensure the feasibility of FCS locations and capacities. To resolve the nonlinearity and nonconvexity, the D$^3$R-FCSP model is equivalently reformulated into a single-level mixed-integer linear programming by exploiting its strong duality and applying the McCormick envelope. Finally, case studies highlight the superior out-of-sample performances of our model in terms of security and cost-efficiency. Furthermore, the byproduct of accelerated EV adoption through an implicit positive feedback loop is highlighted.
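Editor's note: the McCormick envelope mentioned in the reformulation is a standard relaxation of bilinear terms; the sketch below shows the four linear inequalities for w = x*y with box bounds on x and y, independent of the paper's specific model.

```python
# McCormick envelope: linear under- and over-estimators of w = x*y
# for x in [xL, xU], y in [yL, yU].
def mccormick_constraints(x, y, w, xL, xU, yL, yU):
    return [
        w >= xL * y + x * yL - xL * yL,  # under-estimators
        w >= xU * y + x * yU - xU * yU,
        w <= xU * y + x * yL - xU * yL,  # over-estimators
        w <= xL * y + x * yU - xL * yU,
    ]

# At any feasible point with w = x*y, all four inequalities hold:
x, y = 0.4, 2.5
print(mccormick_constraints(x, y, x * y, 0.0, 1.0, 1.0, 3.0))  # [True]*4
```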
Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else
Abstract
Recent advances in text-to-image diffusion models have enabled the photorealistic generation of images from text prompts. Despite the great progress, existing models still struggle to generate compositional multi-concept images naturally, limiting their ability to visualize human imagination. While several recent works have attempted to address this issue, they either introduce additional training or adopt guidance at inference time. In this work, we consider a more ambitious goal: natural multi-concept generation using a pre-trained diffusion model, and with almost no extra cost. To achieve this goal, we identify the limitations in the text embeddings used for the pre-trained text-to-image diffusion models. Specifically, we observe concept dominance and non-localized contribution that severely degrade multi-concept generation performance. We further design a minimal low-cost solution that overcomes the above issues by tweaking (not re-training) the text embeddings for more realistic multi-concept text-to-image generation. Our Correction by Similarities method tweaks the embedding of concepts by collecting semantic features from most similar tokens to localize the contribution. To avoid mixing features of concepts, we also apply Cross-Token Non-Maximum Suppression, which excludes the overlap of contributions from different concepts. Experiments show that our approach outperforms previous methods in text-to-image, image manipulation, and personalization tasks, despite not introducing additional training or inference costs to the diffusion steps.
Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models
Abstract
Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AEs) to deceive deep learning models. Most of these attacks need to handle a vast optimization space and require a large number of queries, hence exhibiting limited practical impact in real-world scenarios. In this paper, we propose a novel black-box attack strategy, Conditional Diffusion Model Attack (CDMA), to improve the query efficiency of generating AEs under query-limited situations. The key insight of CDMA is to formulate the task of AE synthesis as a distribution transformation problem, i.e., benign examples and their corresponding AEs can be regarded as coming from two distinct distributions that can be transformed into each other with a particular converter. Unlike the conventional query-and-optimization approach, we generate eligible AEs with a direct conditional transform using the aforementioned data converter, which can significantly reduce the number of queries needed. CDMA adopts the conditional Denoising Diffusion Probabilistic Model as the converter, which can learn the transformation from clean samples to AEs and ensure the smooth development of perturbed noise resistant to various defense strategies. We demonstrate the effectiveness and efficiency of CDMA by comparing it with nine state-of-the-art black-box attacks across three benchmark datasets. On average, CDMA reduces the query count to a handful of queries; in most cases, only ONE query is needed. We also show that CDMA obtains a $>99\%$ attack success rate for untargeted attacks over all datasets and for targeted attacks on CIFAR-10 with a noise budget of $\epsilon=16$.
Third order tensor-oriented directional splitting for exponential integrators
Abstract
Suitable discretizations through tensor product formulas of popular multidimensional operators (diffusion--advection, for instance) lead to matrices with $d$-dimensional Kronecker sum structure. For evolutionary PDEs containing such operators and integrated in time with exponential integrators, it is of paramount importance to efficiently approximate the actions of $\varphi$-functions of this kind of matrix. In this work, we show how to produce directional split approximations of third order with respect to the time step size. They conveniently employ tensor-matrix products (realized with high-performance level 3 BLAS) and allow for the effective use in practice of exponential integrators up to order three. The approach has been successfully tested against state-of-the-art techniques on two well-known physical models, namely FitzHugh--Nagumo and Schnakenberg.
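Editor's note: the structural fact that makes directional splitting attractive can be checked numerically. For a Kronecker sum M = kron(A, I) + kron(I, B), the action of exp(M) factorizes exactly into tensor-matrix products, so no large matrix function is ever formed; phi-functions admit such splittings only approximately, which is where the paper's third-order approximations come in. The sketch below is a self-contained verification of the exponential case.

```python
# Verify exp(A (+) B) vec(V) = vec(expm(A) @ V @ expm(B).T) with
# row-major vectorization, for a 2D Kronecker sum.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n1, n2 = 3, 4
A, B = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
V = rng.standard_normal((n1, n2))

M = np.kron(A, np.eye(n2)) + np.kron(np.eye(n1), B)  # Kronecker sum
lhs = expm(M) @ V.reshape(-1)                        # naive dense action
rhs = (expm(A) @ V @ expm(B).T).reshape(-1)          # tensor-matrix form
print(np.allclose(lhs, rhs))  # True
```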
Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models
Authors: Lai Zeqiang, Zhu Xizhou, Dai Jifeng, Qiao Yu, Wang Wenhai
Abstract
The revolution in artificial intelligence content generation has been rapidly accelerated by the booming text-to-image (T2I) diffusion models. Within just two years of development, state-of-the-art models could generate images of unprecedented quality, diversity, and creativity. However, a prevalent limitation persists in effectively communicating with these popular T2I models, such as Stable Diffusion, using natural language descriptions. This typically makes an engaging image hard to obtain without expertise in prompt engineering, with its complex word compositions, magic tags, and annotations. Inspired by the recently released DALLE3, a T2I model built directly into ChatGPT that speaks human language, we revisit existing T2I systems with the aim of aligning them with human intent, and introduce a new task: interactive text-to-image (iT2I), in which people can interact with an LLM for interleaved high-quality image generation/editing/refinement and question answering, with stronger image-text correspondence, using natural language. In addressing the iT2I problem, we present a simple approach that augments LLMs for iT2I with prompting techniques and off-the-shelf T2I models. We evaluate our approach for iT2I in a variety of commonly used scenarios under different LLMs, e.g., ChatGPT, LLAMA, Baichuan, and InternLM. We demonstrate that our approach can be a convenient and low-cost way to introduce iT2I ability to any existing LLM and any text-to-image model, without any training, while bringing little degradation of the LLMs' inherent capabilities in, e.g., question answering and code generation. We hope this work draws broader attention and provides inspiration for boosting user experience in human-machine interactions alongside the image quality of the next-generation T2I systems.
ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation
Abstract
Recent works have successfully extended large-scale text-to-image models to the video domain, producing promising results but at a high computational cost and requiring a large amount of video data. In this work, we introduce ConditionVideo, a training-free approach to text-to-video generation based on the provided condition, video, and input text, leveraging the power of off-the-shelf text-to-image generation methods (e.g., Stable Diffusion). ConditionVideo generates realistic dynamic videos from random noise or given scene videos. Our method explicitly disentangles the motion representation into condition-guided and scenery motion components. To this end, the ConditionVideo model is designed with a UNet branch and a control branch. To improve temporal coherence, we introduce sparse bi-directional spatial-temporal attention (sBiST-Attn). The 3D control network extends the conventional 2D ControlNet model, aiming to strengthen conditional generation accuracy by additionally leveraging the bi-directional frames in the temporal domain. Our method exhibits superior performance in terms of frame consistency, CLIP score, and conditional accuracy, outperforming other compared methods.
ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models
Abstract
In this work, we investigate the capability of generating images from pre-trained diffusion models at much higher resolutions than the training image sizes. In addition, the generated images should have arbitrary image aspect ratios. When generating images directly at a higher resolution, 1024 x 1024, with the pre-trained Stable Diffusion using training images of resolution 512 x 512, we observe persistent problems of object repetition and unreasonable object structures. Existing works for higher-resolution generation, such as attention-based and joint-diffusion approaches, cannot well address these issues. As a new perspective, we examine the structural components of the U-Net in diffusion models and identify the crucial cause as the limited perception field of convolutional kernels. Based on this key observation, we propose a simple yet effective re-dilation that can dynamically adjust the convolutional perception field during inference. We further propose the dispersed convolution and noise-damped classifier-free guidance, which can enable ultra-high-resolution image generation (e.g., 4096 x 4096). Notably, our approach does not require any training or optimization. Extensive experiments demonstrate that our approach can address the repetition issue well and achieve state-of-the-art performance on higher-resolution image synthesis, especially in texture details. Our work also suggests that a pre-trained diffusion model trained on low-resolution images can be directly used for high-resolution visual generation without further tuning, which may provide insights for future research on ultra-high-resolution image and video synthesis.
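Editor's note: a minimal sketch of the re-dilation idea, assuming a simple uniform dilation factor applied at inference to all spatial convolutions; which layers to patch and the dilation schedule are the method's design choices, simplified here.

```python
# Enlarge dilation (and padding) of pretrained convolutions at inference
# so their receptive field matches a higher target resolution, no retraining.
import torch
import torch.nn as nn

def redilate(module: nn.Module, factor: int = 2):
    for conv in module.modules():
        if isinstance(conv, nn.Conv2d) and conv.kernel_size[0] > 1:
            conv.dilation = (factor, factor)
            k = conv.kernel_size[0]
            pad = factor * (k - 1) // 2  # keep spatial size unchanged
            conv.padding = (pad, pad)

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 3, 3, padding=1))
redilate(net, factor=2)
out = net(torch.rand(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 128, 128])
```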
Keyword: adaptive
Deep Learning-Based Real-Time Rate Control for Live Streaming on Wireless Networks
Authors: Matin Mortaheb, Mohammad A. Amir Khojastepour, Srimat T. Chakradhar, Sennur Ulukus
Subjects: Networking and Internet Architecture (cs.NI); Information Theory (cs.IT); Machine Learning (cs.LG); Signal Processing (eess.SP); Systems and Control (eess.SY)
Abstract
Providing wireless users with high-quality video content has become increasingly important. However, ensuring consistent video quality poses challenges due to variable encoded bitrates caused by dynamic video content and fluctuating channel bitrates caused by wireless fading effects. Suboptimal selection of encoder parameters can lead to video quality loss due to underutilized bandwidth or to the introduction of video artifacts due to packet loss. To address this, a real-time deep learning based H.264 controller is proposed. This controller leverages instantaneous channel quality data derived from the physical layer, along with the video chunk, to dynamically estimate the optimal encoder parameters with negligible delay in real time. The objective is to maintain an encoded video bitrate slightly below the available channel bitrate. Experimental results, conducted on both the QCIF dataset and a diverse selection of random videos from public datasets, validate the effectiveness of the approach. Remarkably, improvements of 10-20 dB in PSNR with respect to state-of-the-art adaptive bitrate video streaming are achieved, with an average packet drop rate as low as 0.002.
SAILing CAVs: Speed-Adaptive Infrastructure-Linked Connected and Automated Vehicles
Authors: Matthew Nice, Matthew Bunting, George Gunter, William Barbour, Jonathan Sprinkle, Dan Work
Abstract
This work demonstrates a new capability in roadway control: speed-adaptive, infrastructure-linked connected and automated vehicles. We develop and deploy a lightly modified vehicle that is able to dynamically adjust its speed in response to posted variable speed limit messages generated by the infrastructure over LTE connectivity. This work describes the open source hardware and software platform that enables integration between infrastructure-based variable posted speed limits and existing vehicle platforms for automated control. The vehicle is deployed in heavy morning traffic on I-24 in Nashville, TN. The control vehicle follows the posted variable speed limits, resulting in as much as a 25% reduction in speed variability compared to a human-piloted vehicle in the same traffic stream.
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
Abstract
We present Ultima, a new collective-communication system for the cloud with bounded, predictable completion times for deep-learning jobs in the presence of varying computation (stragglers) and communication (congestion and gradient drops) conditions. Ultima exploits the inherent resiliency and stochastic nature of distributed deep-learning (DDL) training to work with approximated gradients, and provides an efficient balance between (tail) performance and the resulting accuracy of the trained models. Exploiting this domain-specific characteristic of DDL, Ultima introduces (1) mechanisms (e.g., Transpose AllReduce, unreliable connection-oriented transport, and adaptive timeouts) to improve the tail execution time of DDL jobs, and (2) strategies (e.g., the Hadamard transform) to mitigate the impact of gradient drops on model accuracy. Our evaluation shows that Ultima achieves 60% faster time-to-accuracy (TTA), on average, when operating in shared environments (e.g., public cloud), and is on par with existing algorithms (e.g., Ring-AllReduce) in dedicated environments (like HPC).
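Editor's note: a minimal sketch of why a Hadamard transform helps with gradient drops, independent of Ultima's actual transport: rotating the gradient spreads each coordinate's energy across all slots, so losing a random subset behaves like mild white noise after the inverse transform instead of zeroing out specific coordinates.

```python
# Orthonormal fast Walsh-Hadamard transform (its own inverse).
import numpy as np

def fwht(x):  # length must be a power of 2
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(len(x))

rng = np.random.default_rng(0)
grad = rng.standard_normal(1024)
coded = fwht(grad)
coded[rng.random(1024) < 0.1] = 0.0  # 10% of packets dropped
recovered = fwht(coded)              # inverse = forward (orthonormal)
print(np.linalg.norm(recovered - grad) / np.linalg.norm(grad))  # ~0.3
```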
Barrier States Theory for Safety-Critical Multi-Objective Control
Authors: Hassan Almubarak, Nader Sadegh, Evangelos A. Theodorou
Abstract
Multi-objective safety-critical control entails a diligent design to avoid possibly conflicting scenarios while ensuring safety. This paper studies the concept of barrier states (BaS) for safe multi-objective control, in which the safety condition is manifested as a dynamical subsystem to be controlled along with the other states of the system. This allows us to introduce the idea of safety embedded systems. The proposition is that the control problem is transformed into designing a control law for the new, unconstrained system such that the barrier state is driven to stay bounded while achieving the other performance objectives. In the stabilization case, for example, we show that designing a stabilizing controller for the safety embedded system implies guaranteed safe stabilization for the original safety-critical system. Consequently, conflicts between performance objectives and safety constraints are substantially avoided. This allows us to embrace various legacy control methods from the literature to acquire safe control laws. Moreover, we discuss how the proposed technique can be employed to enforce input constraints. Additionally, dealing with the constraint through a state allows us to extend various existing control approaches to the safety case. We consider the case of bounded input disturbances and adopt the notion of input-to-state stability (ISS) for barrier states, obtaining the notion of input-to-state safety (ISSf) to analyze the robust safety of systems. Subsequently, we derive the notion of input-to-state safe stability (IS$^3$) and discuss the synthesis of robust safely stabilizing feedback controls by designing robust stabilizing controllers for the safety embedded systems. The proposed techniques and concepts are illustrated in various examples, including the design of proportional-integral-derivative-barrier (PIDB) control for adaptive cruise control.
A Digital Twin Approach for Adaptive Compliance in Cyber-Physical Systems: Case of Smart Warehouse Logistics
Authors: Nan Zhang, Rami Bahsoon, Nikos Tziritas, Georgios Theodoropoulos
Abstract
Engineering regulatory compliance in complex Cyber-Physical Systems (CPS), such as smart warehouse logistics, is challenging due to the open and dynamic nature of these systems, their scale, and unpredictable modes of human-robot interaction that are best learnt at runtime. Traditional offline approaches to engineering compliance often involve modelling at a higher, more abstract level (e.g., using languages like SysML). These abstract models only support analysis in offline-designed and simplified scenarios. However, open and complex systems may be unpredictable, and their behaviours are difficult to capture fully with abstract models. These systems may also involve other business goals, possibly conflicting with regulatory compliance. To overcome these challenges, fine-grained simulation models are a promising complement to abstract models, supporting accurate runtime predictions and performance evaluation with trade-off analysis. The novel contribution of this work is a Digital Twin-oriented architecture for adaptive compliance that leverages abstract goal modelling, fine-grained agent-based modelling, and runtime simulation for managing compliance trade-offs. A case study from smart warehouse logistics is used to demonstrate the approach, considering safety and productivity trade-offs.
Off-Policy Evaluation for Human Feedback
Authors: Qitong Gao, Juncheng Dong, Vahid Tarokh, Min Chi, Miroslav Pajic
Abstract
Off-policy evaluation (OPE) is important for closing the gap between offline training and evaluation of reinforcement learning (RL), by estimating performance and/or rank of target (evaluation) policies using offline trajectories only. It can improve the safety and efficiency of data collection and policy testing procedures in situations where online deployments are expensive, such as healthcare. However, existing OPE methods fall short in estimating human feedback (HF) signals, as HF may be conditioned over multiple underlying factors and is only sparsely available; as opposed to the agent-defined environmental rewards (used in policy optimization), which are usually determined over parametric functions or distributions. Consequently, the nature of HF signals makes extrapolating accurate OPE estimations to be challenging. To resolve this, we introduce an OPE for HF (OPEHF) framework that revives existing OPE methods in order to accurately evaluate the HF signals. Specifically, we develop an immediate human reward (IHR) reconstruction approach, regularized by environmental knowledge distilled in a latent space that captures the underlying dynamics of state transitions as well as issuing HF signals. Our approach has been tested over two real-world experiments, adaptive in-vivo neurostimulation and intelligent tutoring, as well as in a simulation environment (visual Q&A). Results show that our approach significantly improves the performance toward estimating HF signals accurately, compared to directly applying (variants of) existing OPE methods.
Adaptive Gating in Mixture-of-Experts based Language Models
Authors: Jiamin Li, Qiang Su, Yitao Yang, Yimin Jiang, Cong Wang, Hong Xu
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Abstract
Large language models, such as OpenAI's ChatGPT, have demonstrated exceptional language understanding capabilities in various NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE models adopt a fixed gating network where each token is computed by the same number of experts. However, this approach contradicts our intuition that the tokens in each sequence vary in their linguistic complexity and, consequently, require different computational costs. Little is discussed in prior research on the trade-off between computation per token and model performance. This paper introduces adaptive gating in MoE, a flexible training strategy that allows tokens to be processed by a variable number of experts based on the expert probability distribution. The proposed framework preserves sparsity while improving training efficiency. Additionally, curriculum learning is leveraged to further reduce training time. Extensive experiments on diverse NLP tasks show that adaptive gating reduces training time by up to 22.5% while maintaining inference quality. Moreover, we conduct a comprehensive analysis of the routing decisions and present our insights into when adaptive gating is used.
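Editor's note: a minimal sketch of variable-expert routing, assuming a confidence-threshold rule (top-1 expert when the gate is confident, top-2 otherwise); the threshold value and the exact rule are illustrative simplifications.

```python
# Easy tokens cost one expert evaluation; hard tokens cost two.
import torch

def adaptive_gate(logits: torch.Tensor, threshold: float = 0.7):
    probs = torch.softmax(logits, dim=-1)          # (tokens, experts)
    top2_p, top2_i = probs.topk(2, dim=-1)
    use_two = top2_p[:, 0] < threshold             # low-confidence tokens
    routes = [(top2_i[t, 0].item(),) if not use_two[t]
              else (top2_i[t, 0].item(), top2_i[t, 1].item())
              for t in range(logits.shape[0])]
    return routes

logits = torch.tensor([[4.0, 0.1, 0.0, 0.2],   # confident -> 1 expert
                       [1.0, 0.9, 0.8, 0.7]])  # uncertain -> 2 experts
print(adaptive_gate(logits))  # [(0,), (0, 1)]
```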
Multi-Task Learning-Enabled Automatic Vessel Draft Reading for Intelligent Maritime Surveillance
Abstract
Accurate and efficient vessel draft reading (VDR) is an important component of intelligent maritime surveillance, which can be exploited to help judge whether a vessel is normally loaded or overloaded. Computer vision, with its excellent price-to-performance ratio, has become a popular means of estimating vessel draft depth. However, traditional estimation methods suffer from several limitations, such as sensitivity to low-quality images and high computational cost. In this work, we propose a multi-task learning-enabled computational method (termed MTL-VDR) for generating highly reliable VDR. In particular, MTL-VDR consists of four components: draft mark detection, draft scale recognition, vessel/water segmentation, and final draft depth estimation. We first construct a benchmark dataset for draft mark detection and employ a powerful and efficient convolutional neural network to accurately perform the detection task. A multi-task learning method is then proposed for simultaneous draft scale recognition and vessel/water segmentation. To obtain more robust VDR under complex conditions (e.g., damaged and stained scales), accurate draft scales are generated by an automatic correction method based on the spatial distribution rules of draft scales. Finally, an adaptive computational method is exploited to yield an accurate and robust draft depth. Extensive experiments on a realistic dataset compare MTL-VDR with state-of-the-art methods. The results demonstrate its superior performance in terms of accuracy, robustness, and efficiency. The computational speed exceeds 40 FPS, satisfying the requirements of real-time maritime surveillance and guaranteeing vessel traffic safety.
Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality
Authors: Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, Jun Zhu
Abstract
Prompt-based continual learning is an emerging direction in leveraging pre-trained knowledge for downstream continual learning, and has almost reached the performance pinnacle under supervised pre-training. However, our empirical research reveals that the current strategies fall short of their full potential under the more realistic self-supervised pre-training, which is essential for handling vast quantities of unlabeled data in practice. This is largely due to the difficulty of task-specific knowledge being incorporated into instructed representations via prompt parameters and predicted by uninstructed representations at test time. To overcome the exposed sub-optimality, we conduct a theoretical analysis of the continual learning objective in the context of pre-training, and decompose it into hierarchical components: within-task prediction, task-identity inference, and task-adaptive prediction. Following these empirical and theoretical insights, we propose Hierarchical Decomposition (HiDe-)Prompt, an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics of both uninstructed and instructed representations, further with the coordination of a contrastive regularization strategy. Our extensive experiments demonstrate the superior performance of HiDe-Prompt and its robustness to pre-training paradigms in continual learning (e.g., up to 15.01% and 9.61% lead on Split CIFAR-100 and Split ImageNet-R, respectively). Our code is available at https://github.com/thu-ml/HiDe-Prompt.
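Editor's note: the task-identity inference component can be sketched in its simplest usable form: fit per-task statistics over uninstructed features and route a test feature to the most likely task's prompt. The nearest-mean rule below is a deliberate simplification of statistics-based inference, not the paper's exact procedure.

```python
# Per-task feature statistics -> task-identity inference at test time.
import torch

def fit_task_stats(features_per_task):
    return [f.mean(dim=0) for f in features_per_task]  # per-task means

def infer_task(x, task_means):
    dists = torch.stack([(x - m).pow(2).sum() for m in task_means])
    return int(dists.argmin())                         # nearest-mean task id

feats = [torch.randn(100, 32) + 3 * i for i in range(3)]  # 3 synthetic tasks
means = fit_task_stats(feats)
print(infer_task(torch.randn(32) + 6.0, means))           # -> 2
```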
AdaMesh: Personalized Facial Expressions and Head Poses for Speech-Driven 3D Facial Animation
Abstract
Speech-driven 3D facial animation aims to generate facial movements that are synchronized with the driving speech, and has been widely explored recently. Existing works mostly neglect person-specific talking style in generation, including facial expression and head pose styles. Several works attempt to capture these personalities by fine-tuning modules; however, limited training data leads to a lack of vividness. In this work, we propose AdaMesh, a novel adaptive speech-driven facial animation approach that learns a personalized talking style from a reference video of about 10 seconds and generates vivid facial expressions and head poses. Specifically, we propose mixture-of-low-rank-adaptation (MoLoRA) to fine-tune the expression adapter, which efficiently captures the facial expression style. For the personalized pose style, we propose a pose adapter that builds a discrete pose prior and retrieves the appropriate style embedding via a semantic-aware pose style matrix, without fine-tuning. Extensive experimental results show that our approach outperforms state-of-the-art methods, preserves the talking style of the reference video, and generates vivid facial animation. The supplementary video and code will be available at https://adamesh.github.io.
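Editor's note: a minimal sketch of a mixture-of-low-rank-adaptation layer in the MoLoRA spirit: several rank-r adapters share one frozen base weight, and learned mixture weights combine their updates. Ranks, expert count, and the gating are illustrative, not the paper's configuration.

```python
# Frozen base weight + softmax-weighted sum of low-rank (B @ A) updates.
import torch
import torch.nn as nn

class MoLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=4, n_experts=3):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                  # frozen backbone
        self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        self.mix = nn.Parameter(torch.zeros(n_experts))  # mixture logits

    def forward(self, x):
        w = torch.softmax(self.mix, dim=0)           # (n_experts,)
        # delta = sum_e w_e * B_e @ A_e @ x
        delta = torch.einsum("e,eor,eri,bi->bo", w, self.B, self.A, x)
        return self.base(x) + delta

y = MoLoRALinear(16, 8)(torch.rand(4, 16))
print(y.shape)  # torch.Size([4, 8])
```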
SAGE-ICP: Semantic Information-Assisted ICP
Authors: Jiaming Cui, Jiming Chen, Liang Li
Subjects: Robotics (cs.RO); Computer Vision and Pattern Recognition (cs.CV)
Abstract
Robust and accurate pose estimation in unknown environments is an essential part of robotic applications. We focus on LiDAR-based point-to-point ICP combined with effective semantic information. This paper proposes a novel semantic information-assisted ICP method named SAGE-ICP, which leverages semantics in odometry. Semantic information for the whole scan is extracted in a timely and efficient manner by a 3D convolutional network, and these point-wise labels are deeply involved in every part of the registration, including semantic voxel downsampling, data association, adaptive local mapping, and dynamic vehicle removal. Unlike previous semantic-aided approaches, the proposed method can improve localization accuracy in large-scale scenes even if the semantic information contains certain errors. Experimental evaluations on KITTI and KITTI-360 show that our method outperforms the baseline methods and improves accuracy while maintaining real-time performance, i.e., running faster than the sensor frame rate.
CacheGen: Fast Context Loading for Language Model Applications
Authors: Yuhan Liu, Hanchen Li, Kuntai Du, Jiayi Yao, Yihua Cheng, Yuyang Huang, Shan Lu, Michael Maire, Henry Hoffmann, Ari Holtzman, Ganesh Ananthanarayanan, Junchen Jiang
Subjects: Networking and Internet Architecture (cs.NI); Machine Learning (cs.LG)
Abstract
As large language models (LLMs) take on more complex tasks, their inputs incorporate longer contexts to answer questions that require domain knowledge or user-specific conversational histories. Yet, using long contexts poses a challenge for responsive LLM systems, as nothing can be generated until the entire context has been fetched and processed by the LLM. Existing systems optimize only the computation delay in context processing (e.g., by caching intermediate key-value features of the text context) but often cause longer network delays in context fetching (e.g., key-value features consume orders of magnitude more bandwidth than the text context). This paper presents CacheGen, which minimizes the delays in fetching and processing contexts for LLMs. CacheGen reduces the bandwidth needed for transmitting long contexts' key-value (KV) features through a novel encoder that compresses KV features into compact bitstream representations. The encoder combines adaptive quantization with a tailored arithmetic coder, taking advantage of the KV features' distributional properties, such as locality across tokens. Furthermore, CacheGen minimizes the total delay in fetching and processing a context by using a controller that determines when to load the context as compressed KV features or as raw text, and picks the appropriate compression level if loaded as KV features. We test CacheGen on three models of various sizes and three datasets of different context lengths. Compared to recent methods that handle long contexts, CacheGen reduces bandwidth usage by 3.7-4.3x and the total delay in fetching and processing contexts by 2.7-3x, while maintaining similar LLM performance on various tasks to loading the text contexts.
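Editor's note: a minimal sketch of the adaptive-quantization idea, assuming per-tensor uniform quantization with a per-layer bit budget; the arithmetic coder and the token-locality modeling of the actual encoder are omitted, and the bit allocation shown is illustrative.

```python
# Quantize each layer's KV tensor with its own bit width; fewer bits
# means a smaller bitstream at the cost of higher reconstruction error.
import numpy as np

def quantize(kv: np.ndarray, bits: int):
    lo, hi = kv.min(), kv.max()
    levels = 2 ** bits - 1
    q = np.round((kv - lo) / (hi - lo + 1e-12) * levels).astype(np.uint16)
    dequant = q / levels * (hi - lo) + lo
    return q, dequant

rng = np.random.default_rng(0)
kv_per_layer = [rng.standard_normal((8, 64)) for _ in range(4)]
bit_alloc = [8, 6, 4, 4]  # illustrative: some layers kept at higher fidelity
for layer, (kv, bits) in enumerate(zip(kv_per_layer, bit_alloc)):
    _, approx = quantize(kv, bits)
    err = np.abs(approx - kv).mean()
    print(f"layer {layer}: {bits} bits, mean abs error {err:.4f}")
```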
Adaptive and Gamified Learning Paths with Polyglot and .NET Interactive
Abstract
The digital age is changing the role of educators and pushing for a paradigm shift in the education system as a whole. Growing demand for general and specialized education inside and outside classrooms is at the heart of this rising trend. In modern, heterogeneous learning environments, the one-size-fits-all approach has proven fundamentally flawed. Individualization through adaptivity is therefore crucial to nurture individual potential and address accessibility needs and neurodiversity. By formalizing a learning framework that takes all these different aspects into account, we aim to define and implement an open, content-agnostic, and extensible platform to design and consume adaptive and gamified learning experiences.
Guided Attention for Interpretable Motion Captioning
Abstract
While much effort has been invested in generating human motion from text, relatively few studies have been dedicated to the reverse direction, that is, generating text from motion. Much of the research focuses on maximizing generation quality without any regard for the interpretability of the architectures, particularly regarding the influence of particular body parts in the generation and the temporal synchronization of words with specific movements and actions. This study explores the combination of movement encoders with spatio-temporal attention models and proposes strategies to guide the attention during training to highlight perceptually pertinent areas of the skeleton in time. We show that adding guided attention with an adaptive gate leads to interpretable captioning while improving performance compared to higher-parameter-count non-interpretable SOTA systems. On the KIT MLD dataset, we obtain a BLEU@4 of 24.4% (SOTA +6%), a ROUGE-L of 58.30% (SOTA +14.1%), a CIDEr of 112.10 (SOTA +32.6), and a Bertscore of 41.20% (SOTA +18.20%). On HumanML3D, we obtain a BLEU@4 of 25.00 (SOTA +2.7%), a ROUGE-L of 55.4% (SOTA +6.1%), a CIDEr of 61.6 (SOTA -10.9%), and a Bertscore of 40.3% (SOTA +2.5%). Our code implementation and reproduction details will soon be available at https://github.com/rd20karim/M2T-Interpretable/tree/main.
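One way such guidance can be implemented is as an auxiliary loss that pulls the attention map toward a prior over perceptually relevant joints. The KL form and the existence of such a prior mask are assumptions for illustration, not the paper's exact strategy.

```python
# Sketch of a guided-attention penalty: the spatial attention over
# skeleton joints is pushed toward a (hypothetical) prior mask of
# perceptually relevant joints per time step.
import torch
import torch.nn.functional as F

def attention_guidance_loss(attn, prior, eps=1e-8):
    """attn, prior: (B, T, J) nonnegative weights over J skeleton joints."""
    attn = attn / attn.sum(-1, keepdim=True).clamp_min(eps)
    prior = prior / prior.sum(-1, keepdim=True).clamp_min(eps)
    return F.kl_div(attn.clamp_min(eps).log(), prior, reduction="batchmean")
```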
Adaptive Distributionally Robust Planning for Renewable-Powered Fast Charging Stations Under Decision-Dependent EV Diffusion Uncertainty
Authors: Yujia Li, Feng Qiu, Chenxi Hu, Yunhe Hou
Subjects: Systems and Control (eess.SY); Optimization and Control (math.OC)
Abstract
When deploying fast charging stations (FCSs) to support long-distance trips of electric vehicles (EVs), there exist indirect network effects: while the gradual diffusion of EVs directly influences the timing and capacities of FCS allocation, the decisions for FCS allocations, in turn, impact the drivers' willingness to adopt EVs. This interplay, if neglected, can result in uncovered EVs and security issues on the grid side, and can even hinder the effective diffusion of EVs. In this paper, we explicitly incorporate this interdependence by quantifying EV adoption rates as decision-dependent uncertainties (DDUs) using decision-dependent ambiguity sets (DDASs). Then, a two-stage decision-dependent distributionally robust FCS planning (D$^3$R-FCSP) model is developed for adaptively deploying FCSs with on-site sources and expanding the coupled distribution network. A multi-period capacitated arc cover-path cover (MCACPC) model is incorporated to capture the EVs' recharging patterns and ensure the feasibility of FCS locations and capacities. To resolve the nonlinearity and nonconvexity, the D$^3$R-FCSP model is equivalently reformulated into a single-level mixed-integer linear program by exploiting its strong duality and applying the McCormick envelope. Finally, case studies highlight the superior out-of-sample performance of our model in terms of security and cost-efficiency. Furthermore, accelerated EV adoption through an implicit positive feedback loop is highlighted as a byproduct.
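Since the reformulation hinges on the McCormick envelope, it may help to recall the generic relaxation it applies. For a bilinear product $w = xy$ with bounds $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$, the envelope replaces the product with four linear inequalities; this is shown below as standard background, not the paper's exact constraints.

```latex
% Standard McCormick envelope for w = xy on a box domain.
\begin{align*}
w &\ge x^{L} y + x\, y^{L} - x^{L} y^{L}, &
w &\ge x^{U} y + x\, y^{U} - x^{U} y^{U}, \\
w &\le x^{U} y + x\, y^{L} - x^{U} y^{L}, &
w &\le x^{L} y + x\, y^{U} - x^{L} y^{U}.
\end{align*}
```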
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Authors: Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao
Abstract
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning (VRL). Although methods like resetting and regularization can potentially mitigate plasticity loss, the influences of various components within the VRL framework on the agent's plasticity are still poorly understood. In this work, we conduct a systematic empirical exploration focusing on three primary underexplored facets and derive the following insightful conclusions: (1) data augmentation is essential in maintaining plasticity; (2) the critic's plasticity loss serves as the principal bottleneck impeding efficient training; and (3) without timely intervention to recover the critic's plasticity in the early stages, its loss becomes catastrophic. These insights suggest a novel strategy to address the high replay ratio (RR) dilemma, where exacerbated plasticity loss hinders the potential improvements in sample efficiency brought by increased reuse frequency. Rather than setting a static RR for the entire training process, we propose Adaptive RR, which dynamically adjusts the RR based on the critic's plasticity level. Extensive evaluations indicate that Adaptive RR not only avoids catastrophic plasticity loss in the early stages but also benefits from more frequent reuse in later phases, resulting in superior sample efficiency.
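The control rule behind Adaptive RR can be pictured with a short sketch: the replay ratio ramps up only while a plasticity proxy for the critic stays healthy. The proxy (fraction of active units), thresholds, and ramp schedule below are illustrative assumptions, not the paper's exact recipe.

```python
# Toy Adaptive RR rule: keep data reuse low while the critic's
# plasticity proxy is degraded; otherwise ramp the replay ratio up.
def adaptive_replay_ratio(active_unit_fraction: float,
                          current_rr: float,
                          low_rr: float = 0.5,
                          high_rr: float = 2.0,
                          threshold: float = 0.3) -> float:
    """Return the replay ratio to use for the next update phase."""
    if active_unit_fraction < threshold:
        return low_rr                          # plasticity degraded: reuse data less
    return min(high_rr, current_rr * 1.1)      # plasticity healthy: ramp up reuse
```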
Imitation Learning from Observation with Automatic Discount Scheduling
Abstract
Humans often acquire new skills through observation and imitation. For robotic agents, learning from the plethora of unlabeled video demonstration data available on the Internet necessitates imitating the expert without access to its actions, a challenge known as Imitation Learning from Observations (ILfO). A common approach to tackling ILfO problems is to convert them into inverse reinforcement learning problems, utilizing a proxy reward computed from the agent's and the expert's observations. Nonetheless, we identify that tasks characterized by a progress dependency property pose significant challenges for such approaches; in these tasks, the agent needs to learn the expert's earlier behaviors before mastering the subsequent ones. Our investigation reveals that the main cause is that the reward signals assigned to later steps hinder the learning of initial behaviors. To address this challenge, we present a novel ILfO framework that enables the agent to master earlier behaviors before advancing to later ones. We introduce an Automatic Discount Scheduling (ADS) mechanism that adaptively alters the discount factor in reinforcement learning during training, prioritizing earlier rewards initially and gradually engaging later rewards only once the earlier behaviors have been mastered. Our experiments, conducted on nine Meta-World tasks, demonstrate that our method significantly outperforms state-of-the-art methods across all tasks, including those that they cannot solve.
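A minimal sketch of the scheduling idea: the discount factor starts low, so early proxy rewards dominate, and is raised toward its final value as a progress signal indicates the earlier behaviors are mastered. The linear schedule and the notion of "progress" here are our assumptions for illustration.

```python
# Toy Automatic Discount Scheduling: map a progress estimate in [0, 1]
# (e.g., how far along the expert trajectory the agent reliably tracks)
# to the RL discount factor used for the next training phase.
def scheduled_discount(progress: float,
                       gamma_min: float = 0.9,
                       gamma_max: float = 0.99) -> float:
    progress = max(0.0, min(1.0, progress))
    return gamma_min + (gamma_max - gamma_min) * progress
```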
Faster Location in Combinatorial Interaction Testing
Authors: Ryan E. Dougherty, Dylan N. Green, Grace M. Kim
Abstract
Factors within a large-scale software system that simultaneously interact and strongly impact the system's response under a configuration are often difficult to identify. Although screening such a system for the existence of such interactions is important, determining their location is more useful for system engineers. Combinatorial interaction testing (CIT) concerns the creation of test suites that nonadaptively either detect or locate the desired interactions, each of at most a specified size, or show that no such set exists. Under the assumption that at most a given number of such interactions cause the response, locating arrays (LAs) guarantee unique location for every such set of interactions. To handle outliers and nondeterministic behavior from real systems, we additionally require the LAs to have a "separation" between these collections. State-of-the-art approaches generate LAs that can locate at most one interaction of size at most three, due to the massive number of interaction combinations for larger parameters when no constraints are given. This paper presents LocAG, a two-stage algorithm that generates (unconstrained) LAs using a simple but powerful partitioning strategy over these combinations. In particular, we are able to generate LAs with more factors, any desired separation, and larger interaction sizes than existing approaches.
GMOCAT: A Graph-Enhanced Multi-Objective Method for Computerized Adaptive Testing
Abstract
Computerized Adaptive Testing (CAT) refers to an online system that adaptively selects the best-suited question for students of various abilities based on their historical response records. Most CAT methods focus only on the quality objective of predicting student ability accurately, but neglect concept diversity and question exposure control, which are important for ensuring the performance and validity of CAT. Besides, the students' response records contain valuable relational information between questions and knowledge concepts, which previous methods ignore, resulting in the selection of sub-optimal test questions. To address these challenges, we propose a Graph-Enhanced Multi-Objective method for CAT (GMOCAT). Firstly, three objectives, namely quality, diversity, and novelty, are introduced into the Scalarized Multi-Objective Reinforcement Learning framework of CAT; they correspond respectively to improving prediction accuracy, increasing concept diversity, and reducing question exposure. We use an Actor-Critic Recommender to select questions and optimize the three objectives simultaneously via a scalarization function. Secondly, we utilize graph neural networks to learn relation-aware embeddings of questions and concepts. These embeddings aggregate neighborhood information in the relation graphs between questions and concepts. We conduct experiments on three real-world educational datasets and show that GMOCAT not only outperforms state-of-the-art methods in ability prediction, but also achieves superior performance in improving concept diversity and alleviating question exposure. Our code is available at https://github.com/justarter/GMOCAT.
Quality of Service-Constrained Online Routing in High Throughput Satellites
Authors: Olivier Bélanger, Olfa Ben Yahia, Stéphane Martel, Antoine Lesage-Landry, Gunes Karabulut Kurt
Subjects: Networking and Internet Architecture (cs.NI); Signal Processing (eess.SP)
Abstract
High Throughput Satellites (HTSs) outpace traditional satellites due to their multi-beam transmission. The rise of low Earth orbit mega constellations amplifies HTS data rate demands to terabits/second with acceptable latency. This surge in data rate necessitates multiple modems, often exceeding the capabilities of a single device. Consequently, satellites employ several processors, forming a complex packet-switch network. This can lead to potential internal congestion and challenges in adhering to strict quality of service (QoS) constraints. While significant research exists on constellation-level routing, a literature gap remains on the internal routing within a single HTS. The intricacy of this internal network architecture presents a significant challenge to achieving high data rates. This paper introduces an online optimal flow allocation and scheduling method for HTSs. The problem is treated as a multi-commodity flow instance with data streams of different priorities. An initial full-time-horizon model is proposed as a benchmark. We apply a model predictive control (MPC) approach to enable adaptive routing based on current information and the forecast within the prediction time horizon, while allowing for deviations from the latter. Importantly, MPC is inherently suited to handle uncertainty in incoming flows. Our approach minimizes packet loss by optimally and adaptively managing the priority queue schedulers and flow exchanges between satellite processing modules. Central to our method is a routing model focusing on optimal priority scheduling to enhance data rates and maintain QoS. The model's stages are critically evaluated, and results are compared to traditional methods via numerical simulations. Through simulations, our method demonstrates performance nearly on par with the hindsight optimum, showcasing its efficiency and adaptability in addressing satellite communication challenges.
An Explicit Local Space-Time Adaptive Framework for Monodomain Models
Authors: Dennis Ogiermann, Daniel Balzani, Luigi E. Perotti
Abstract
We present a new explicit local space-time adaptive framework to decrease the time required for monodomain simulations in cardiac electrophysiology. Based on the localized structure of the steep activation wavefront in solutions to monodomain problems, the proposed framework adopts small time steps and a tree-based adaptive mesh refinement scheme only in the regions necessary to resolve these localized structures. The time step and mesh adaptation selection process is fully controlled by a combination of local error indicators. The main contributions of this work are the introduction of a primal symmetric interior penalty formulation of the monodomain model and an efficient algorithmic strategy to manage local time stepping for its temporal discretization. In a first serial implementation of this framework, we report wall-clock time reductions of between 2 and 20 times with respect to an optimized implementation of a commonly used numerical scheme, showing that this framework is a promising candidate to accelerate monodomain simulations of cardiac electrophysiology.
MatFormer: Nested Transformer for Elastic Inference
Abstract
Transformer models are deployed in a wide range of settings, from multi-accelerator clusters to standalone mobile phones. The diverse inference constraints in these scenarios require practitioners to train foundation models such as PaLM 2, Llama, and ViTs as a series of models of varying sizes. Due to significant training costs, only a select few model sizes are trained and supported, limiting fine-grained control over relevant tradeoffs, including latency, cost, and accuracy. This work introduces MatFormer, a nested Transformer architecture designed to offer elasticity across a variety of deployment constraints. Each Feed Forward Network (FFN) block of a MatFormer model is jointly optimized with a few nested smaller FFN blocks. This training procedure allows for the Mix'n'Match of model granularities across layers -- i.e., a trained universal MatFormer model enables extraction of hundreds of accurate smaller models that were never explicitly optimized. We empirically demonstrate MatFormer's effectiveness across different model classes (decoders & encoders), modalities (language & vision), and scales (up to 2.6B parameters). We find that a 2.6B decoder-only MatFormer language model (MatLM) allows us to extract smaller models spanning from 1.5B to 2.6B, each exhibiting comparable validation loss and one-shot downstream evaluations to their independently trained counterparts. Furthermore, we observe that smaller encoders extracted from a universal MatFormer-based ViT (MatViT) encoder preserve the metric-space structure for adaptive large-scale retrieval. Finally, we showcase that speculative decoding with the accurate and consistent submodels extracted from MatFormer can further reduce inference latency.
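The nesting is easy to picture as prefix slices of a single FFN's weight matrices that are trained jointly, so smaller FFNs can later be sliced out without retraining. The sketch below is an illustrative PyTorch reading; the widths and the joint-training recipe are assumptions, not the paper's configuration.

```python
# Illustrative nested FFN: one weight matrix exposes several prefix
# widths, each a valid smaller FFN sharing the larger one's parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NestedFFN(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, widths=(512, 1024, 2048)):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        self.widths = widths

    def forward(self, x, width=None):
        w = width or self.widths[-1]
        # slice the first w hidden units of both projections
        h = F.gelu(F.linear(x, self.up.weight[:w], self.up.bias[:w]))
        return F.linear(h, self.down.weight[:, :w], self.down.bias)

ffn = NestedFFN()
x = torch.randn(4, 16, 512)
# joint training would combine losses over the nested widths:
outs = [ffn(x, w) for w in ffn.widths]
```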
Keyword: quantization
RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems
Authors: Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Abstract
In practical cloud-edge scenarios, where a resource-constrained edge performs data acquisition and a cloud system (having sufficient resources) performs inference tasks with a deep neural network (DNN), adversarial robustness is critical for reliability and ubiquitous deployment. Adversarial detection is a prime adversarial defence technique used in prior literature. However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection, which incurs a high computational overhead that is not available at the low-power edge. Therefore, prior works can only perform adversarial detection at the cloud and not at the edge. This means that, in the case of adversarial attacks, the unfavourable adversarial samples must be communicated to the cloud, which leads to energy wastage at the edge device. Therefore, a low-power, edge-friendly adversarial detection method is required to improve the energy efficiency of the edge and the robustness of the cloud-based classifier. To this end, RobustEdge proposes Quantization-enabled Energy Separation (QES) training with "early detection and exit" to perform edge-based, low-cost adversarial detection. The QES-trained detector implemented at the edge blocks adversarial data transmission to the classifier model, thereby improving the adversarial robustness and energy efficiency of the Cloud-Edge system.
Distillation Improves Visual Place Recognition for Low-Quality Queries
Authors: Anbang Yang, Yao Wang, John-Ross Rizzo, Chen Feng
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Abstract
The shift to online computing for real-time visual localization often requires streaming query images/videos to a server for visual place recognition (VPR), where fast video transmission may result in reduced resolution or increased quantization. This compromises the quality of global image descriptors, leading to decreased VPR performance. To improve the low recall rate for low-quality query images, we present a simple yet effective method that uses high-quality queries only during training to distill better feature representations for deep-learning-based VPR, such as NetVLAD. Specifically, we use a mean squared error (MSE) loss between the global descriptors of queries of different qualities, and an inter-channel correlation knowledge distillation (ICKD) loss over their corresponding intermediate features. We validate our approach using both the Pittsburgh 250k dataset and our own indoor dataset with varying quantization levels. By fine-tuning NetVLAD parameters with our distillation-augmented losses, we achieve notable VPR recall-rate improvements over low-quality queries, as demonstrated in our extensive experimental results. We believe this work not only pushes forward VPR research but also provides valuable insights for applications needing dependable place recognition under resource-limited conditions.
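A compact sketch of the training losses described above, assuming PyTorch: the high-quality branch acts as the teacher for both the global descriptor (MSE) and an inter-channel-correlation term standing in for the ICKD loss. The loss weight and the exact correlation form are illustrative assumptions.

```python
# Distillation losses for low-quality VPR queries: MSE on global
# descriptors plus an inter-channel correlation match on features.
import torch
import torch.nn.functional as F

def vpr_distill_loss(desc_hq, desc_lq, feat_hq, feat_lq, beta=0.1):
    """desc_*: (B, D) global descriptors; feat_*: (B, C, H, W) features."""
    mse = F.mse_loss(desc_lq, desc_hq.detach())

    def chan_corr(f):                     # inter-channel correlation (B, C, C)
        f = f.flatten(2)                  # (B, C, H*W)
        f = F.normalize(f, dim=2)
        return f @ f.transpose(1, 2)

    ickd = F.mse_loss(chan_corr(feat_lq), chan_corr(feat_hq).detach())
    return mse + beta * ickd
```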
Sparse Finetuning for Inference Acceleration of Large Language Models
Authors: Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Abstract
We consider the problem of accurate sparse finetuning of large language models (LLMs), that is, finetuning pretrained LLMs on specialized tasks while inducing sparsity in their weights. On the accuracy side, we observe that standard loss-based finetuning may fail to recover accuracy, especially at high sparsities. To address this, we perform a detailed study of distillation-type losses, determining an L2-based distillation approach we term SquareHead, which enables accurate recovery even at higher sparsities, across all model types. On the practical efficiency side, we show that sparse LLMs can be executed with speedups by taking advantage of sparsity, for both CPU and GPU runtimes. While the standard approach is to leverage sparsity for computational reduction, we observe that in the case of memory-bound LLMs, sparsity can also be leveraged to reduce memory bandwidth. We exhibit end-to-end results showing speedups due to sparsity while recovering accuracy, on T5 (language translation), Whisper (speech translation), and an open GPT-type model (MPT, for text generation). For MPT text generation, we show for the first time that sparse finetuning can reach 75% sparsity without accuracy drops, provide notable end-to-end speedups for both CPU and GPU inference, and highlight that sparsity is also compatible with quantization approaches. Models and software for reproducing our results are provided in Section 6.
Motion Vector-Domain Video Steganalysis Exploiting Skipped Macroblocks
Authors: Jun Li, Minqing Zhang, Ke Niu, Yingnan Zhang, Xiaoyuan Yang
Subjects: Multimedia (cs.MM); Cryptography and Security (cs.CR)
Abstract
Video steganography has the potential to be used to convey illegal information, and video steganalysis is a vital tool to detect the presence of this illicit act. Currently, all motion vector (MV)-based video steganalysis algorithms extract feature sets directly from the MVs, ignoring that the steganographic operation may perturb the statistical distribution of other video encoding elements, such as the skipped macroblocks (which have no direct MVs). Based on this observation, this paper proposes a novel 11-dimensional feature set to detect MV-based video steganography. The proposed feature is extracted from the skipped macroblocks by recompression calibration. Specifically, the feature consists of two components: the first is the probability distribution of the motion vector prediction (MVP) difference, and the second is the probability distribution of the partition state transfer. Extensive experiments under different conditions demonstrate that the proposed feature set achieves good detection accuracy, especially at lower embedding capacities. In addition, the loss of detection performance caused by recompression calibration with mismatched quantization parameters (QP) is within an acceptable range, so the proposed method can be used in practical scenarios.
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
Abstract
Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks. Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but the process has been challenging due to its extraordinary resource requirements. To this end, existing efforts focus on parameter-efficient fine-tuning, which unfortunately fails to capitalize on the full potential of full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance. Our framework incorporates two novel ideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the momentum and has consistent update magnitudes for each parameter, an inherent advantage for robust quantization; and (ii) we quantize all model states and store them as integer values, and present a gradient flow and parameter update scheme for the quantized weights. As a result, QFT reduces the model state memory to 21% of the standard solution while achieving comparable performance, e.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a single A6000 GPU.
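The optimizer-side intuition can be sketched in a few lines: Lion keeps only a momentum buffer and applies sign-based updates of constant magnitude, which makes an integer-quantized buffer plausible. The 8-bit scheme below is a toy stand-in for the paper's actual gradient-flow and update design.

```python
# Toy Lion step with the momentum buffer held as int8 plus one
# floating-point scale; weights here are plain fp tensors.
import torch

@torch.no_grad()
def lion_step_int8(w, m_q, m_scale, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    m = m_q.float() * m_scale                        # dequantize momentum
    update = torch.sign(beta1 * m + (1 - beta1) * grad)
    w -= lr * (update + wd * w)                      # sign update: constant magnitude
    m = beta2 * m + (1 - beta2) * grad               # new momentum
    m_scale = m.abs().max().clamp_min(1e-8) / 127.0  # requantize to int8
    m_q = torch.round(m / m_scale).to(torch.int8)
    return w, m_q, m_scale
```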
CacheGen: Fast Context Loading for Language Model Applications
Authors: Yuhan Liu, Hanchen Li, Kuntai Du, Jiayi Yao, Yihua Cheng, Yuyang Huang, Shan Lu, Michael Maire, Henry Hoffmann, Ari Holtzman, Ganesh Ananthanarayanan, Junchen Jiang
Subjects: Networking and Internet Architecture (cs.NI); Machine Learning (cs.LG)
Abstract
As large language models (LLMs) take on more complex tasks, their inputs incorporate longer contexts to respond to questions that require domain knowledge or user-specific conversational histories. Yet, using long contexts poses a challenge for responsive LLM systems, as nothing can be generated until all the contexts are fetched to and processed by the LLM. Existing systems optimize only the computation delay in context processing (e.g., by caching intermediate key-value features of the text context) but often cause longer network delays in context fetching (e.g., key-value features consume orders of magnitude larger bandwidth than the text context). This paper presents CacheGen to minimize the delays in fetching and processing contexts for LLMs. CacheGen reduces the bandwidth needed for transmitting long contexts' key-value (KV) features through a novel encoder that compresses KV features into more compact bitstream representations. The encoder combines adaptive quantization with a tailored arithmetic coder, taking advantage of the KV features' distributional properties, such as locality across tokens. Furthermore, CacheGen minimizes the total delay in fetching and processing a context by using a controller that determines when to load the context as compressed KV features or raw text and picks the appropriate compression level if loaded as KV features. We test CacheGen on three models of various sizes and three datasets of different context lengths. Compared to recent methods that handle long contexts, CacheGen reduces bandwidth usage by 3.7-4.3x and the total delay in fetching and processing contexts by 2.7-3x while maintaining similar LLM performance on various tasks as loading the text contexts.
Keyword: efficient
Hyperdimensional Computing as a Rescue for Efficient Privacy-Preserving Machine Learning-as-a-Service
Malware Classification using Deep Neural Networks: Performance Evaluation and Applications in Edge Devices
Performance Analysis of Various EfficientNet Based U-Net++ Architecture for Automatic Building Extraction from High Resolution Satellite Images
DeepTriNet: A Tri-Level Attention Based DeepLabv3+ Architecture for Semantic Segmentation of Satellite Images
Open SYCL on heterogeneous GPU systems: A case of study
Flood and Echo: Algorithmic Alignment of GNNs with Distributed Computing
Efficient Path Planning in Large Unknown Environments with Switchable System Models for Automated Vehicles
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
CarDS-Plus ECG Platform: Development and Feasibility Evaluation of a Multiplatform Artificial Intelligence Toolkit for Portable and Wearable Device Electrocardiograms
Neural Relational Inference with Fast Modular Meta-learning
A predict-and-optimize approach to profit-driven churn prevention
An efficient saddle search method for ordered phase transitions involving translational invariance
The impact when neural min-sum variants meet ordered statistics decoding of LDPC codes
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
Operating-Envelopes-Aware Decentralized Welfare Maximization for Energy Communities
$pκ$-Curves: Interpolatory curves with curvature approximating a parabola
DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation
Multi-Task Learning-Enabled Automatic Vessel Draft Reading for Intelligent Maritime Surveillance
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes
Enhancing Neural Architecture Search with Multiple Hardware Constraints for Deep Learning Model Deployment on Tiny IoT Devices
Are GATs Out of Balance?
AdaMesh: Personalized Facial Expressions and Head Poses for Speech-Driven 3D Facial Animation
SAGE-ICP: Semantic Information-Assisted ICP
Optimizing the Placement of Roadside LiDARs for Autonomous Driving
Deep ReLU networks and high-order finite element methods II: Chebyshev emulation
Distilling Efficient Vision Transformers from CNNs for Semantic Segmentation
An Analysis on Large Language Models in Healthcare: A Case Study of BioBERT
Revisiting Android App Categorization
Score Regularized Policy Optimization through Diffusion Behavior
Molecule-Edit Templates for Efficient and Accurate Retrosynthesis Prediction
A webcam-based machine learning approach for three-dimensional range of motion evaluation
Multichannel consecutive data cross-extraction with 1DCNN-attention for diagnosis of power transformer
An Empirical Study of Instruction-tuning Large Language Models in Chinese
Choosing optimal parameters for a distributed multi-constrained QoS routing
Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model
LESS-Map: Lightweight and Evolving Semantic Map in Parking Lots for Long-term Self-Localization
Deep Kernel and Image Quality Estimators for Optimizing Robotic Ultrasound Controller using Bayesian Optimization
IRS-Assisted Federated Learning: A Broadband Over-the-Air Aggregation Approach
A Novel Voronoi-based Convolutional Neural Network Framework for Pushing Person Detection in Crowd Videos
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Analytical Die-to-Die 3D Placement with Bistratal Wirelength Model and GPU Acceleration
Distance-based Weighted Transformer Network for Image Completion
Efficient machine-learning surrogates for large-scale geological carbon and energy storage
Spike-time encoding of gas concentrations using neuromorphic analog sensory front-end
Multimodal Graph Learning for Generative Tasks
Solving Semi-Discrete Optimal Transport Problems: star shapedness and Newton's method
Model-based Clustering of Individuals' Ecological Momentary Assessment Time-series Data for Improving Forecasting Performance
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Human-Centered Evaluation of XAI Methods
Building hierarchies of semiclassical Jacobi polynomials for spectral methods in annuli
Third order tensor-oriented directional splitting for exponential integrators
In-Context Unlearning: Language Models as Few Shot Unlearners
Qlarify: Bridging Scholarly Abstracts and Papers with Recursively Expandable Summaries
Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models
Transformers for Green Semantic Communication: Less Energy, More Semantics
Prospective Side Information for Latent MDPs
An Explicit Local Space-Time Adaptive Framework for Monodomain Models
AG-CVG: Coverage Planning with a Mobile Recharging UGV and an Energy-Constrained UAV
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
Differentiable Euler Characteristic Transforms for Shape Classification
Prompt Backdoors in Visual Prompt Learning
Polytopal discontinuous Galerkin discretization of brain multiphysics flow dynamics
Hybrid System Stability Analysis of Multi-Lane Mixed-Autonomy Traffic
DiPmark: A Stealthy, Efficient and Resilient Watermark for Large Language Models
Keyword: faster
A quantum annealing-sequential quadratic programming assisted finite element simulation for non-linear and history-dependent mechanical problems
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
SAGE-ICP: Semantic Information-Assisted ICP
GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning
Automatic Control of Reactive Brain Computer Interfaces
Building hierarchies of semiclassical Jacobi polynomials for spectral methods in annuli
Approximating Subset Sum Ratio faster than Subset Sum
Keyword: mobile
Extended Reality via Cooperative NOMA in Hybrid Cloud/Mobile-Edge Computing Networks
CarDS-Plus ECG Platform: Development and Feasibility Evaluation of a Multiplatform Artificial Intelligence Toolkit for Portable and Wearable Device Electrocardiograms
Pre-Trained Masked Image Model for Mobile Robot Navigation
Automatic Macro Mining from Interaction Traces at Scale
Rate Adaptation Aware Positioning for Flying Gateways using Reinforcement Learning
Secure Decentralized Learning with Blockchain
CrashTranslator: Automatically Reproducing Mobile Application Crashes Directly from Stack Trace
Integrated Sensing and Communication enabled Multiple Base Stations Cooperative Sensing Towards 6G
Textiverse: A Scalable Visual Analytics System for Exploring Geotagged and Timestamped Text Corpora
Integrated Sensing and Communication Neighbor Discovery for MANET with Gossip Mechanism
PoRF: Pose Residual Field for Accurate Neural Surface Reconstruction
S4C: Self-Supervised Semantic Scene Completion with Neural Fields
AG-CVG: Coverage Planning with a Mobile Recharging UGV and an Energy-Constrained UAV
MatFormer: Nested Transformer for Elastic Inference
Keyword: pruning
SparseCoder: Advancing Source Code Analysis with Sparse Attention and Learned Token Pruning
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Keyword: diffusion
Monsters in the Dark: Sanitizing Hidden Threats with Diffusion Models
ObjectComposer: Consistent Generation of Multiple Objects Without Fine-tuning
Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE
A new mixed finite element method for arbitrary element pair for a quasi-static nonlinear permeability thermo-poroelasticity model
Denoising Task Routing for Diffusion Models
Imitation Learning from Purified Demonstration
State of the Art on Diffusion Models for Visual Computing
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes
Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model
Score Regularized Policy Optimization through Diffusion Behavior
WiGenAI: The Symphony of Wireless and Generative AI via Diffusion Models
Adaptive Distributionally Robust Planning for Renewable-Powered Fast Charging Stations Under Decision-Dependent EV Diffusion Uncertainty
Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else
Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models
Third order tensor-oriented directional splitting for exponential integrators
Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models
ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation
ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models
Keyword: adaptive
Deep Learning-Based Real-Time Rate Control for Live Streaming on Wireless Networks
SAILing CAVs: Speed-Adaptive Infrastructure-Linked Connected and Automated Vehicles
Ultima: Robust and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud
Barrier States Theory for Safety-Critical Multi-Objective Control
A Digital Twin Approach for Adaptive Compliance in Cyber-Physical Systems: Case of Smart Warehouse Logistics
Off-Policy Evaluation for Human Feedback
Adaptive Gating in Mixture-of-Experts based Language Models
Multi-Task Learning-Enabled Automatic Vessel Draft Reading for Intelligent Maritime Surveillance
Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality
AdaMesh: Personalized Facial Expressions and Head Poses for Speech-Driven 3D Facial Animation
SAGE-ICP: Semantic Information-Assisted ICP
CacheGen: Fast Context Loading for Language Model Applications
Adaptive and Gamified Learning Paths with Polyglot and .NET Interactive
Guided Attention for Interpretable Motion Captioning
Adaptive Distributionally Robust Planning for Renewable-Powered Fast Charging Stations Under Decision-Dependent EV Diffusion Uncertainty
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Imitation Learning from Observation with Automatic Discount Scheduling
Faster Location in Combinatorial Interaction Testing
GMOCAT: A Graph-Enhanced Multi-Objective Method for Computerized Adaptive Testing
Quality of Service-Constrained Online Routing in High Throughput Satellites
An Explicit Local Space-Time Adaptive Framework for Monodomain Models
MatFormer: Nested Transformer for Elastic Inference
Keyword: quantization
RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems
Distillation Improves Visual Place Recognition for Low-Quality Queries
Sparse Finetuning for Inference Acceleration of Large Language Models
Motion Vector-Domain Video Steganalysis Exploiting Skipped Macroblocks
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
CacheGen: Fast Context Loading for Language Model Applications